Today, Explained - EU vs. AI
Episode Date: December 18, 2023. The EU has advanced first-of-its-kind AI regulation. The Verge's Jess Weatherbed tells us whether it will make a difference, and Columbia University's Anu Bradford explains the Brussels effect. This episode was produced by Amanda Lewellyn, edited by Matt Collette, fact-checked by Laura Bullard, engineered by Patrick Boyd, and hosted by Sean Rameswaram. Transcript at vox.com/todayexplained. Support Today, Explained by making a financial contribution to Vox! bit.ly/givepodcasts Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
AI this, AI that.
AI is changing the way we learn.
No wonder ChatGPT has been called the end of high school English,
the end of the college essay,
and the return of the handwritten in-class essay.
It's changing the way we work.
I have a friend who's building an app on his own.
Normally he would have hired a developer.
Instead he's just using ChatGPT because it can code
and it can correct its own code.
Pretty soon, it'll be changing the way we do just about everything.
Alexa, order trash liners.
And yet, there have been just about zero rules
governing artificial intelligence until now.
I think we have made history today.
Europe's gone and done some regulating on our latest technology
as Europe often does. People are using words like landmark and first of its kind. You might even hear
us use those words on this episode of Today Explained. BetMGM, authorized gaming partner
of the NBA, has your back all season long.
From tip-off to the final buzzer, you're always taken care of with a sportsbook born in Vegas.
That's a feeling you can only get with BetMGM.
And no matter your team, your favorite player, or your style, there's something every NBA fan will love about BetMGM.
Download the app today and discover why BetMGM is your basketball home for the season.
Raise your game to the next level
this year with BetMGM,
a sportsbook worth a slam dunk,
an authorized gaming partner of the NBA.
BetMGM.com for terms and conditions.
Must be 19 years of age or older to wager.
Ontario only.
Please play responsibly.
If you have any questions or concerns
about your gambling or someone close to you,
please contact Connex Ontario at 1-866-531-2600 to speak to an advisor free of charge.
BetMGM operates pursuant to an operating agreement with iGaming Ontario.
You're listening to Today Explained.
Jess Weatherbed is a news writer at The Verge.
And for a while now, she's been covering the EU's new AI Act.
And was this process easy breezy, fancy free, or was it complicated?
Complicated is probably one way of putting it.
We've got kind of a two-year timeline roughly to work with here.
They proposed the AI Act back in 2021.
Artificial intelligence must serve people,
and therefore artificial intelligence must always comply with people's rights.
AI, as we know it at that point, was vastly different to what it is now.
They were working
with systems that were designed to do a very specific purpose. So it was a little bit easier
to try and categorize how risky those systems are going to be for people living within the EU.
There's been a hell of a lot of interruptions and a lot of disruption caused by various things,
but the biggest one I would say in the last two years was systems like ChatGPT.
It's a software that has gone viral this week.
It's a chatbot that uses natural language processing
to generate responses to user inputs.
Now suddenly rather than these AI systems
that are only built to do a specific job,
you've got these foundation models or large language models
that are literally designed to do pretty much anything you put their mind to.
They can generate images, they can write text,
they can apparently write code, they can, you know.
And it presented so many new options that weren't anything that could be covered by the original scope of the AI Act.
We have seen a change. We have seen a change in the mood, in the discourse and now there is no big deal, no big contraposition on the need to find
a sensible regulation for generative AI. It had to be reworked several times over. It's one of
the largest contributors to the delays in actually getting it approved. That, and discussions around how it can be applied to things like national security and law enforcement, were the two most highly contested points of what's been argued and debated over the last year.
Okay, so the EU has been talking about regulating AI for a good while, but things got real this
December.
It kind of came to a head around 10 days ago.
That was when the provisional agreements were made.
But prior to that, it was about 36 hours of just solid debate.
I think that you have all heard and probably agree that AI is too important not to regulate,
and it's too important to badly regulate.
A good regulation that we all agree on as soon as possible must be a common objective.
At one point, they had been in Brussels having one prolonged conversation trying to iron out the differences and compromises for about 22 hours at least. So you've got a lot of tired, cranky, potentially sleep-deprived lawmakers,
policymakers all crammed into a building
trying to finalize a set of blueprint AI regulations
that are apparently going to, you know,
set the example for every other global regulator.
And yet they're in a room cramming like college students
before a finals exam.
But they got it done.
It was worth it because they got it done?
Yeah, well, they got it provisionally agreed. So this is kind of the first big step to it being
completely done. What did they get done? What are these landmark EU regulations on artificial
intelligence? We don't know for sure yet. The full text isn't going to be available for several
weeks. What they tend to do
with these is that they will make compromises based on principles. Now they actually have to
go away and jazz it up with the legal language that they want to have things adhered to at that
point. And we built a risk-based approach that identifies high-risk AI use cases that needs to be more regulated, that needs to be checked on data
used to train it. A lot of it seems to be following the same kind of framework that
they proposed years ago, which is that they wanted a risk-based tiered system that you could
categorize different AI systems with. And by taking certain attributes of what they could
apply to, you can then go, great, these
are low risk.
These ones are high risk, which means that they're going to have to be subjected to all
these investigations.
They're going to have to tell us what they're doing with their data, how much power they
use.
It just makes it a little bit easier rather than having a complete free-for-all or wild
west of not even having a distinction for what AI is, which was where they were in 2021.
And we also identified uses that we think should be outrightly banned, like the use
of biometric identification in public spaces on real time to avoid, by banning it, the
risk of mass surveillance. And we also ban the predictive
policing, social scoring, emotional recognition for students and workers, because we think that
in these cases, there is no added value and more risks than benefits.
I remember Joe Biden sort of announcing his oversight of AI and all the AI guys came to the White House and everyone was
smiling and it felt very performative. Were the AI guys happy about this regulation or were they
less into it? So far, I haven't seen anything in terms of a response to the finalized provisional
agreement that we've just seen. But early on, when they were talking about blanketing all of these
systems as high risk because they just didn't know what to do with them, they were very
unhappy, to say the least. So companies like OpenAI, Microsoft, and Google, these companies
all lobbied the EU to basically come forward and go, but we're fine to self-regulate. We don't need
to be held accountable. And it's not fair that you're going to assume that our products are
automatically bad just because they could be bad.
We've seen what happens to countries that try to over-regulate tech.
I don't think that's what we want here.
It wasn't only the AI companies that were a little bit unhappy about it.
There were also some members of the EU that were not particularly happy about it, because they're hoping that homegrown AI companies can have the room to innovate. So we know that France, Germany, and Italy at some point back in November turned around
and went, how about we just don't regulate general AI at all and let them do their own
thing?
And I think quite understandably, a lot of the rest of the EU turned around and went,
uh, no.
Yeah.
And then that led on to a couple of extra weeks of highly contested arguments about
what they should be regulating, what should be involved in this agreement, two years after it was first proposed.
Well, speaking of delays, when do these rules go into effect?
Within six months, anything that's outright prohibited, which includes, for example, a lot of the national security and biometric surveillance stuff, that will come into effect within six months, to try and get that enforced as quickly as possible. The obligations that apply to general AI, so the stuff that's probably going to impact companies like OpenAI, should come into effect 12 months after the law itself has taken effect. So that stuff is probably going to take effect in 2025, anything that's going to impact these big companies. And then the remainder, all the rest of the gubbins, everything else that they need to tidy up, should be in place by 2026. So the full force of this law could take another two years, two and a half years potentially, to actually do anything.
Are we going to have a different set of problems by that point?
When you consider how much stuff changed between them proposing this law in 2021,
and then just OpenAI kind of appearing on the scene a year later going, hey,
have you heard of generative AI that can do anything? We could be looking at a completely different landscape by then. And
unfortunately, I'm not in the business of being like a seer or a prophet or anything. So I can't
turn around and say, absolutely, we're going to have a different environment to deal with.
They've done the best with this system that they've got, the tiered system, to make sure that
anything that's introduced might be able to just be categorized within it already. They know that if it's going to be an AI model that's turning around saying that it can do several different jobs, it's going to be classified as a general AI, and it's going to be beholden to a certain level of restrictions. Same as if they look at it and go, well, yeah, this is a, I don't know, a customer-facing chatbot, that's going to go here. They've made it so that anything that's developed in the future hypothetically can be slotted into the existing rules.
But as far as I'm aware, that was also the plan in 2021.
And look what happened.
I don't think it's the ironclad set of rules that anyone was hoping for at this point in time, especially when a lot of the stuff that needs to be regulated, people are actually saying, needs to kind of be discussed, like, now. And it's taken, I would say, probably an embarrassingly long time
for a bunch of politicians and AI providers to come together
and make an agreement on what should constitute a safe development going forward.
It's very much like the dog meme of just sitting there with a room on fire going,
This is fine. I'm okay with the events that are unfolding currently. It's good
that they're in. There's still a lot of work to be done and there's still some changes that could
be made. But for now, at least, it's a concrete agreement. There's no more squabbling about
how we're going to work the tiered systems or what's going to be included in there. The framework
is in place and they can at least progress now, rather than the stagnation that we've had for the past two years.
Jess Weatherbed at The Verge, read her at theverge.com.
When we're back on Today Explained, why Europe is always first with this stuff.
First.
Support for Today Explained comes from Aura. Aura believes that sharing pictures is a great way to keep up with family, and Aura says it's never been easier thanks to
their digital picture frames. They were named the number one digital photo frame by Wirecutter.
Aura frames make it easy to share unlimited photos and videos directly from your phone to the frame.
When you give an Aura frame as a gift, you can personalize it.
You can preload it with a thoughtful message, maybe your favorite photos.
Our colleague Andrew tried an Aura frame for himself.
So setup was super simple.
In my case, we were celebrating my grandmother's birthday.
And she's very fortunate.
She's got 10 grandkids.
And so we wanted to surprise her with the AuraFrame,
and because she's a little bit older,
it was just easier for us to source all the images together
and have them uploaded to the frame itself.
And because we're all connected over text message,
it was just so easy to send a link to everybody.
You can save on the perfect gift by visiting AuraFrames.com to get $35 off Aura's best-selling Carver Mat frames with promo code
EXPLAINED at checkout. That's A-U-R-A-Frames.com, promo code EXPLAINED. This deal is exclusive to
listeners and available just in time for the holidays. Terms and conditions do apply.
Support for Today Explained comes from Ramp.
If you're a finance manager, you're probably used to having to toggle between multiple disjointed tools just to keep track of everything.
And sometimes that means there's limited visibility on business spend.
I don't know what any of that means, but Ramp might be able to help.
Ramp is a corporate card and spend management software designed to help you save time and put money back in your back pocket. Ramp's accounting
software automatically collects receipts, categorizes your expenses in real time. You can
say goodbye to manual expense reports. You will never have to chase down a receipt again. You can
customize spending limits and restrictions so your employees are empowered to purchase what your
business needs, and you can have peace of mind. And now you can get $250 when you join Ramp. You
go to ramp.com slash explained, ramp.com slash explained, ramp.com slash explained. Cards are
issued by Sutton Bank, a member of the FDIC, and terms and conditions do apply.
Today Explained, Sean Rameswaram joined by Columbia University Professor Anu Bradford.
She's the author of a book called The Brussels Effect,
How the European Union Rules the World. So the Brussels Effect refers to the European Union's unilateral ability to regulate the global marketplace. So the EU is one of the largest and
wealthiest consumer markets in the world, and there are very few global companies that can
afford not to trade in the EU. So as the price for accessing the European market, they need to follow European regulations.
But often it is in their business interest to basically extend those regulations across their global production
or their global conduct because they want to avoid the cost of complying with multiple different regulatory regimes.
Is the EU just always trying to be first, or is there something special going on here?
How are they able to move relatively, comparatively quickly on artificial intelligence? I don't think the EU necessarily wants to be first, but it just has the ability to regulate. If you compare to the United States,
there is not similar polarization
in the European legislature
as there is in Congress.
So the political divides
are not preventing legislation
from moving forward.
There's also much less lobbying,
or the lobbying is less effective in the EU.
So the US legislative process is very much shaped by tech companies
that have influence over lawmaking.
And the EU just does not operate quite the same way.
So the civil society also has access to legislators
and often then offsets or balances the message
that the lawmakers are hearing from the tech companies.
So basically what you're saying is in Europe, you all have functional government.
That is one very good way to put it. There is still a functioning government in the EU.
There's a legislator that is capable of passing legislation, and that makes a big difference.
So can you give us a sense of the history here,
how much the EU has managed to accomplish in terms of tech regulation because of this
functional government ideology on technology? So I would go back to the early 1990s.
That's when the US really stepped back from regulation.
Because the internet has such explosive potential for prosperity,
it should be a global free trade zone.
Up until then, the US had often been setting the rules that had global impact. But then the
US really adopted this market-driven dogma that was very anti-regulation.
So the US took the lead in promoting this deregulation agenda. It should be a place where
government makes every effort first, as the vice president said, not to stand in the way. And the EU stepped in and filled the vacuum because at that
very point the EU was ramping up its own efforts to integrate the common European market and that
meant it needed to harmonize regulations so that we remove the barriers from within the member
states for trading within the EU. So the EU started proactively building a regulatory state,
not for the purpose of ruling the world,
but for the purpose of making Europe
an integrated, strong trading area.
We will strengthen the impact of this community
through the ongoing implementation
of common foreign and security policies.
So then the EU started focusing its regulatory efforts on digital economy.
The European Union has approved rules to force big technology firms such as Google, Facebook and Twitter to remove illegal content.
The European Union has hit tech giant Meta with a record-breaking fine of over a billion dollars for defying privacy rules. And the gap between what the EU was producing and what the US was failing to do
in the regulatory space just became larger and larger. But initially, it was really the US's
decision to say that, look, we trust the markets and the EU making philosophically a very different
rule. And I think the inadvertent effect, the unintended consequence was that the U.S. basically ceded this whole governance space to the EU.
And what has it accomplished? Give us some of the greatest hits.
Well, I would say the GDPR is by far the most famous hit.
The European Union's General Data Protection Regulation, known to friends as GDPR, goes into effect tomorrow.
So that was enacted in 2016.
And that is a very significant regulation in shaping the entire global data privacy conversation and legislative frameworks.
Then also antitrust.
So the Europeans are very concerned about the abuse of market power by dominant tech companies.
You have to recognize that you have powers beyond anyone else.
And with that comes a responsibility.
So there have been four antitrust lawsuits against Google that have been successfully concluded in the EU
and that have resulted in around 10 billion dollars in fines.
And then there is the content moderation space.
So the Europeans are very concerned about disinformation.
They are very concerned about hate speech and the toxic environment surrounding
internet users when they are using the platforms.
And we need to say to some of these service providers,
you have a responsibility for the way you do business to make sure that people feel as comfortable when they are online
as well as when they are offline. So the Europeans have moved to limit hate speech and limit
disinformation, even though they remain committed to freedom of expression. There is just a sense
that that important commitment to free speech is balanced
against some other fundamental rights, including a right to dignity. And a hard pivot away from
dignity to your phone chargers may be the most tangible of all these Brussels effects.
There are USB-A chargers, there are USB-B chargers, there are USB-C chargers,
there are micro USB chargers, there are mini USB chargers, there are light... The EU also
regulates consumer electronics. So there's an environmental concern surrounding consumer waste.
And then another concern, just the consumer convenience, if you like, the idea that we do not want the consumers to have to buy different cords
for all their different devices and all the different jurisdictions
where they are using them.
So the EU standardized the common charger,
which then led Apple to also switch its own charging port
and extend that change,
not just in Europe, but also outside of the EU.
The word from Apple basically is like,
the Europeans made us do it, but it's time and we think people aren't going to freak out.
Now, in a case like that, with the Apple USB-C charger situation, literally everyone around the world who has this device will have their tech changed because of this EU regulation.
Why does it make more sense for a tech company like Apple
to change this charging port for the whole world
instead of just for the European market?
Tell us how the Brussels effect makes sense
for a business. So often for these tech companies, it's just a matter of efficiency
and a cost calculus. So it is not efficient to run multiple different production lines. There
are scale economies in uniform production. So they don't want to be producing different variations for different markets.
And same applies for companies like Meta's Facebook.
They pride themselves on having one global Facebook.
So if you and me are having a conversation and I'm in Europe and you are in the United States,
they don't want there to be different speech rules
that apply to the conversation, whereby I would not be seeing a part of the conversation that you
are able to see because there are different content moderation rules. That would make it really
difficult to have effective cross-border conversations. But I would say, Sean, that the
most common reason is just simply it is just too expensive to have many
varieties of the same product. I mean, I want to bring this back to AI. Has Europe met its match
in artificial intelligence? We were talking earlier in the show about how these new regulation
proposals may not go into effect until 2026, at least fully, that's a very long time away. And this technology might look
dramatically different by then. Our guest did mention that, you know, these regulations might
have room to be augmented to fit whatever AI looks like in 2026. But this does feel like
a new day for tech regulation. So I completely agree and I concede that regulating in this space
is extremely difficult. This is a fast-moving technology and nobody knows where we are a few
years from now. But I don't think it is a reason not to intervene and regulate. There are simply
too many serious harms that we need to guard individuals and societies against.
So a responsible government does step in, even knowing that that regulation may need to be revisited.
But you cannot let the perfect be the enemy of the good.
There are also tremendous costs in waiting.
And we do not want to just watch for all those costs to be materialized. The goal here is
not to crush the development of AI, because I think it really is important to encourage that.
I think the goal is to understand that alongside those opportunities, there are non-trivial harms,
and the governments need to take those seriously.
And trusting the tech companies to self-govern, it is irresponsible because these companies are so focused on pursuing the profits that they just cannot afford to spend enough time on thinking about what happens to democracy, what happens to individual rights. And that's not
even their expertise. And that's why I take comfort that there seems to be global momentum growing,
that the governments are now increasingly seeing that, look, we need to regulate this space.
And now they have a template. Now they have an example that the governments can step in.
And when they can step in, how do they do it? Well, they can look at the EU's AI Act.
And I think that is a very powerful example for the rest of the world.
Anu Bradford, Columbia University.
She's the author of The Brussels Effect, How the European Union Rules the World.
But even more recently, and also pertinent to our conversation, Digital Empires, The Global Battle to Regulate Technology.
Our show today was produced by Amanda Lewellyn.
It was edited by Matthew Collette, fact-checked by Laura Bullard, and mixed by Patrick Boyd.
Today Explained!