The Decibel - The race to regulate artificial intelligence
Episode Date: May 23, 2023
Whether you like it or not, AI is everywhere. It unlocks your phone through facial recognition, it manages spam emails in your inbox and it creates realistic photos of the Pope in a puffer jacket. With rapid developments in technology infiltrating our everyday lives, it’s a race for governments to figure out how to regulate it. And Ottawa might be playing catch-up.
Joe Castaldo is with The Globe’s Report on Business. Today, he explains the federal government’s plan to regulate AI for consumers and data protection, and how this proposed legislation compares to others worldwide.
And here's a link to our survey!
Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Transcript
I feel like if you spend any time online these days, you cannot escape AI-generated content.
Joe Castaldo is with The Globe's Report on Business.
And recently, he's been covering what's been going on with artificial intelligence.
I'm sure a lot of us heard Heart on My Sleeve, which is this song where somebody cloned Drake's and The Weeknd's voices
and had like a viral hit on their hands, at least until it was pulled from streaming services over copyright concerns.
Before that, there was Pope in a Coat, right?
This AI generated image of Pope Francis in his righteous puffy coat.
So while there's been some fun around how these tools can be used, there's also a lot of questions. That stuff seems kind of trivial, but I think the questions that it raises are really interesting,
right? Like, how are we going to discern what's real and what's not? Like,
what kind of compensation is owed to artists?
As artificial intelligence becomes more embedded in our lives, governments around the world,
including here in Canada, are trying to figure out how to deal with it. So today, we'll talk to Joe about the race
to regulate AI. I'm Menaka Raman-Wilms, and this is The Decibel from The Globe and Mail. Joe, thanks for being here today.
Thanks for the invite.
Over the past few months, you've been doing a lot of reporting on this industry, especially with ChatGPT being out there now.
People are familiar with that.
They played around with that.
It seems like things are moving really, really fast.
I mean, is AI moving as fast as it seems like it is?
It's interesting. I think in some ways, yes. I think ChatGPT really opened up people's eyes
to where the technology is at. But like the technology underlying ChatGPT has been in
development for a long time. ChatGPT was just really easy to use. And it just like caught fire,
which, you know, prompted other companies like Google to be like, Oh, my God, we have to do something here. Like we've been developing this technology for a long time, like we have to get
in on this. And it just created these these real competitive pressures. And now like every company
is thinking about,
okay, how are we going to integrate AI?
Like how are we going to do this so we're not left behind?
And it's interesting.
We've also seen some worries coming out about this
from prominent tech leaders.
So recently, Sam Altman, who's the CEO of OpenAI,
which created ChatGPT, he talked about his concerns.
And then there was also that AI pioneer researcher who
quit Google so that he could speak freely about what he called the dangers of the technology.
And in late March, there was also an open letter signed by a number of prominent tech leaders,
including Elon Musk and Apple co-founder Steve Wozniak, and they were calling for an immediate pause on AI development.
So Joe, let me ask you about that letter specifically. What are they so worried about?
The risks that they cited in that letter, I guess they're kind of all over the place. Like,
there's some near-term stuff. And then there's really long-term existential risks. So they talk about job loss, like is AI going to automate all of our jobs?
They raise the notion of losing control of civilization due to like, you know, super intelligent AI.
And, you know, are we going to be enslaved by robots and live in some Terminator-esque nightmare?
Well, I found the response to that letter really interesting.
A lot of researchers dismissed those long-term hypothetical risks about losing control of civilization and things like that.
They said it was just complete hyperbole.
And there's a risk of ignoring some of the real harms and risks that exist today.
Like what?
Are AI systems being used to make decisions about people? Do they end up discriminating against women and people of color, like AI systems have
been known to do? Things like that are more real today and could theoretically be dealt with
through some kind of regulation. So it sounds a little bit like, you know, looking at these kind
of the bigger existential risks, you know, we're talking about like terminators taking over,
that can almost be a bit of a distraction from letting us focus on the real issues that we're seeing these days. Well, let's talk a little bit about near-term
risks or nearer-term risks. Here in Canada, the government is trying to address some of these
worries around the power of AI. In June of last year, actually, the federal government put forward
a bill looking in part to regulate artificial intelligence. So Joe, let's look
at this bill. What exactly are we talking about here? Yeah, so this is Bill C-27. And so there's
three components to it. The first two deal with consumer privacy and data protection.
But then there's this third component all about AI. That's called the Artificial Intelligence and Data Act. And I guess at a very high level,
what it's trying to do is eventually write some regulations around how to develop and put AI
systems out into the world in a safe, responsible manner and protect individual Canadians from harms. There are a couple of other parts to it. It would also establish an AI commissioner who would be responsible for oversight and enforcement, and it sets out some financial penalties that companies and individuals who violate the law would have to pay.
Okay. So let's focus on this part of Bill C-27 that deals with artificial intelligence. So this is what's called AIDA, the Artificial Intelligence and Data Act. Practically, Joe, what would this exactly do? It sounds like there's
ideas about regulation, but on a practical level, what would we see from this?
So practically, it wouldn't do much. Depending on where you fall, this is either a
feature of AIDA and the whole point of AIDA, or it's the bill's fatal flaw. There are no specifics
in AIDA. What it would do is it would allow, say, the Ministry of Innovation, Science and Economic Development, to write regulations later. So all of the important
details are still to be determined. Like at this point, it's not even clear what specifically would
be regulated. The government has talked about high risk AI systems, but we don't really know
what that means. This sounds pretty unusual to have a bill with like, it seems really vague, essentially,
that we don't exactly know what it would regulate.
Yeah. And so there is a reason for that. The government's perspective on this is
AI technology develops really fast, as we've seen in the past few months. So it's challenging to
regulate AI systems. Researchers talk about emergent capabilities, which essentially means
a larger AI system might be able to do things that you couldn't predict based on the smaller
version of that AI system. So there are a lot of unknowns. And in order to keep on
top of that, you need to be flexible. So if you write a law now that has a bunch of specifics in
it in terms of what's regulated, what the requirements are, it could be out of date
very quickly, depending on how technology develops. And then in order to respond to that, you need to amend the
law. And that's a whole, you know, political parliamentary process that takes time and is
uncertain. Okay, so essentially, by keeping things vague, it's allowing a little bit of flexibility
to adapt to fast changing things. But the downside, it sounds like is the fact that it's vague,
there's not a lot of clear, direct things that we know are going to be regulated by this bill.
Yes, and so one of the major criticisms you will hear about that
is the whole process has not been very democratic.
It leaves too much power, basically, in the hands of the ministry
to write these really important regulations
without necessarily a lot of public
input or public consultation. When AIDA came out last year, it kind of took people by surprise.
People who follow this stuff didn't anticipate artificial intelligence would be part of Bill C-27.
So some people would argue a better approach would be, you know, we should have
had conversations about artificial intelligence public consultations first before it ever
emerged. So like people like Jim Balsillie, you know, former BlackBerry co-CEO, who's very
vocal on these issues, you know, says ADA is deeply flawed because of that. A law professor I spoke to said, you know,
it's essentially a blank check for the government to write regulations as they see fit. Michelle
Rempel Garner, a Conservative MP, you know, has said that, you know, it takes this whole process
behind closed doors and out of the public eye. And I guess I just wonder, like, it sounds like politicians
in Canada are going to have a big hand in what actually happens here with this regulation. Like,
are they equipped to handle an industry that is so fast moving? Like, do they even know enough
about how things work here? Yeah, I mean, that's a legitimate issue. To go back to Michelle Rempel
Garner, that's something she's said as well, that, you know, parliamentarians need to increase their level of knowledge about this stuff. And so, you know,
she's been working with Senator Colin Deacon on getting kind of a working group together
across parties. When AIDA was written, we hadn't seen this explosion in like generative AI. So
things have changed in short order.
We'll be right back.
So it sounds like there are some criticisms, certainly, of the way this is being done.
What about the other side of it, though, Joe? Are there any experts that like what Canada is proposing here with this regulation? Absolutely, yeah. I think the flexibility argument carries a lot of weight for some people.
The EU, for example, has the EU AI Act, which came out in 2021. It's still not law yet, and it
does have more specifics. I was looking at the PDF the other day, and it runs like more than 100 pages of very small, dense text.
AIDA is, I don't know, maybe 10 pages.
You know, one of the criticisms there is it has been bogged down with amendments.
So it's not moving quickly necessarily there.
That would be, it's basically being held up by government process, essentially. If you introduce an amendment, it has to go through that whole process in order to become law there.
Yeah. Or if you have a bunch of specifics, people are going to argue over those
specifics. So one of the arguments in favor of AIDA that I heard is, well, you know, we don't
necessarily want to get into that situation in Canada. I don't know if Canadian Parliament
can deal with that. So AIDA is the way to go.
What are other places doing?
A lot of governments around the world are trying to figure this out, and there's no one way to go about it.
So the EU, as we mentioned before, it does have a lot more specifics in it.
The AI Act there talks specifically about chatbots, and says providers should be required to
disclose that this is an AI you're talking to, not a human being. In the US, it seems like
existing regulators like the Federal Trade Commission are sort of independently figuring
out how to deal with AI within their jurisdiction. But the Biden administration is taking some steps. Earlier in April, they put out a public call for comment on AI systems like ChatGPT
and raised the question of whether potentially risky AI technologies need to go through some kind of certification process
before being released to the public.
China, of course, has its own approach. A lot of it, well, some of it has to do with censorship and chatbots
and what they can and cannot say.
So I don't know that there's a lot of lessons for us to pull from there.
But kind of what's interesting is they're making a push for truthfulness,
whatever that means according to the Chinese government,
but accuracy as well. That's been a big concern with these language models.
I mean, this is how it's supposed to work too. But what about, I guess, what about how this is
actually implemented, Joe, right? Like, this has got to be pretty hard to regulate. AI is not
really something defined by borders. What are the challenges that would be there?
Yeah, I mean, so there is a lot of talk
about aligning our standards with the US and with the EU, because we live in a global economy.
And again, like, it is tough to regulate unknowns. Which again, the Canadian government would argue,
well, that's why this approach is the right one, because we can respond quickly.
What do experts say about how, I guess, how the government should go about tackling such
a broad industry? Are there any ideas about, you know, what are some of the key things that we
should have in this legislation, or as we're going about this, some of the key things we
should be thinking about?
Yeah, at a high level, I mean, there's agreement around things like transparency.
So AI developers should be able to explain how the technology works.
Like you often hear experts talk about AI as a black box, right?
It contains mysteries that we cannot possibly understand.
Like we don't always know why it makes decisions that it does.
And it's important to know that.
Yeah, like right now, we don't even know what ChatGPT is trained on, right? That's something that hasn't been made public, what data was used in order to train that technology.
And that could be worth knowing because there's always the potential
for these language models to be racist, right?
To engage in stereotypes
because the internet contains a lot of that.
Like when you pull from Reddit or 4chan
or something like that,
that could have an impact.
So it sounds like Canada's, you know, there's legislation in the works to try to get this
to happen. What's going to happen next with this bill, Joe?
So toward the end of April, there was a vote in the House to move Bill C-27 to committee. So the
industry committee is going to look at it. And then, of course, it has to pass the Senate.
So it's following the normal procedures, but the government doesn't anticipate it will be in force before 2025.
Is there an argument on the other side of regulation?
Like, do we maybe not need to regulate things as strictly?
Like, should we maybe just leave this up to individuals and companies?
Is that an argument at all? I don't know that you'd find much support for that idea. That being said, when we talk about AI
regulation and how there's no AI-specific regulation, it might give people the impression
that this is a totally unregulated space, and that's not really true. Like we have existing laws that, you know, apply. Like if, you know, somebody uses an
AI model to clone somebody's voice and scam somebody over the phone, like that's fraud,
like you'd be prosecuted under criminal law. You know, I guess there's a question of what
responsibility does the AI company bear? Or I guess a better example is the Privacy Commissioner
announced an investigation into ChatGPT over its use of data.
So the Privacy Commissioner didn't have to wait for AI regulation to do that, right?
There are existing tools.
Just before I let you go here, Joe, I mean, we've talked about a lot of stuff here.
It seems, frankly, it seems really complicated.
There's a lot to think about.
And given that the government has said this bill trying to address AI won't actually come into effect until 2025, and the pace that everything is growing at, I mean, is this kind of an impossible task?
The sense I'm getting is it's not impossible, but it's challenging.
I think it's likely we'll always be playing catch up in some ways with technology, AI in particular, just given
the speed we've seen recently. And if you look at, again, privacy and consumer protection and
social media platforms and how they use or abuse data, we're only getting more robust
legislation now in Bill C-27. So that's been a long time coming. I wouldn't expect AI to be any different
in that regard. Like we will get something in Canada, but I think unexpected things are still
going to happen with AI. Joe, thank you so much for joining me today. Thanks for having me.
That's it for today. I'm Menaka Raman-Wilms. Our interns are Wafa El-Rayis, Andrew Hines, and Tracy Thomas.
Our producers are Madeline White, Cheryl Sutherland, and Rachel Levy-McLaughlin.
David Crosby edits the show.
Adrian Chung is our senior producer, and Angela Pacienza is our executive editor.
Thanks so much for listening, and I'll talk to you tomorrow.