Orchestrate all the Things - The EU AI Act: What you need to know, How to improve it. Featuring Mozilla Foundation Executive Director Mark Surman and Senior Policy Researcher Maximilian Gahntz
Episode Date: May 12, 2022
After data privacy and GDPR, the EU wants to leave its mark on AI by regulating it with the EU AI Act. Here's what it is, what it means for the world at large, when it's expected to take effect, how it will work in practice, as well as Mozilla's recommendations for improving it, and ways for everyone to be involved in the process. Article published on ZDNet.
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis and we'll be connecting the dots together.
After data privacy and GDPR, the EU wants to leave its mark on AI
by regulating it with the EU AI Act.
Here's what it is, what it means for the world at large,
when it's expected to take effect,
and what experts from the Mozilla Foundation recommend to improve it.
I hope you will enjoy
the podcast. If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn,
and Facebook.
Well, hi, George. Thanks for having us in this interview. I'm Mark Surman. I'm the Executive Director of Mozilla Foundation, and I'm here in Toronto. I run our overall philanthropy and advocacy work, and we're trying to push the field's thinking about how to make a healthier internet.
And a key theme for us in the last few years has been trustworthy AI, because data and machine
learning and what we call today AI are just really such central technical and social and business
fabric to what the internet is and how the internet intersects with society and all of our
lives. So we really think that the key topic to sort through is the shape of how AI works, just like we asked the question of the shape of how the web works 20 years ago, as we were starting to think about Firefox. So my role is really to gather resources and build momentum around that set of questions.
Great, thank you.
Yeah, hi, and also thanks for having us. My name is Maximilian Gahntz, or Max. I'm a senior policy researcher at Mozilla. I mostly work on issues related to AI and data, and that includes this sort of first wave of AI-focused legislation that we're currently seeing, including the AI Act in the EU, of course. Thank you.
And well, that's a good segue to move on to what is, I guess, the main part of the conversation.
So the EU AI Act and Mozilla's reaction and feedback to this regulation. So first let me try and set the stage in a way by saying that
well it's obvious how this act is relevant for people who live in the EU. It may be less obvious
for non-EU residents but the way I see it and I think the way that you frame it as well is that
it can function in a way similar to the way GDPR
functions: setting an example, and also, in terms of enforceability, making companies who want to be active in the EU market comply with this sort of regulation. So
in that sense, it's relevant not just for the EU, but for the world at large. And so I think it would make
sense to just briefly touch upon the life cycle of this EU AI Act and where we are at the moment in this life cycle. As far as I know, it was made public about a year ago, in 2021, and there was a call for feedback from the European Commission. And I think
this is when you first engaged with that. But I think you probably know this better than I do,
so you can share with me and the audience as well.
What I might do is share a little bit of the prehistory of the AI Act and how Mozilla started to think about it.
And then Max, you can talk about where things are at now and how we've been thinking about it or where it's been in the past year.
So the prehistory really is that we've been thinking, for three or four years, about the question of how AI is shaping the internet and how it connects to social, political, and economic questions. And so when the EU formed the High-Level Expert Group on AI, which preceded the Act, and started talking about trustworthy AI, we paid attention, because we've seen that the GDPR has had a ripple effect in putting questions around data, and people's rights in relationship to data, on the global agenda. And we thought the EU may again be able to have that kind of impact if it moved to actual regulation on AI. And so we've really built up our focus on trustworthy AI, shaping where industry goes with a set of values around responsible AI, from around the same time as the EU started talking about it at the High-Level Expert Group level.
Thank you.
Max, if you want to pick it up: how it moved from that into the Act.
Yeah, of course.
So basically, there is a process beforehand, but the first draft of what this law could look like, we got in April last year.
And since then, everyone involved in this entire process
has been sort of preparing to engage.
So the European Parliament had to decide
which committee
and which people in those committees would work on it.
Civil society organizations sort of had their read of the text and developed their positions.
And the point we're at right now is basically
where the exciting bit starts in a way
because the European Parliament is developing its position.
And once the European Parliament has
sort of consolidated what they understand under the term trustworthy AI and proposed their ideas on how to change the
initial draft, member states are doing the same thing. And then
we'll sort of have this final round of negotiations between
the parliament,
the commission, member states, and that's when this will be passed into law. But the EU policy making process is a bit of a long and winding road. So we actually don't have a very robust
idea of when that might be. There are some people saying they want to get this through by the end of the year.
I don't think that's very realistic. But I think we're looking at roughly a year,
maybe, until there is some agreement on what this should look like in the end.
And of course, then there's a transitional period between this being passed into law and this actually taking effect, like with the GDPR as well, which took, I think, two years between being passed and taking effect. So we have a long way
to go before this is in its final form. Yeah, yeah. And well, in some ways, that may be a good
thing, because it means that there's more time for organizations such as Mozilla, for example,
and for the public at large to be informed and eventually even to engage with it as well.
And this is what you've started doing also.
If I were to summarize the core of the approach this act is taking, I think it comes down to the so-called risk-based approach. So there is a sort of classification for AI products in terms of risk. They range from unacceptable, which basically means that products classified in this category will not be deployed in any shape or form, with a couple of exceptions maybe, in the EU; then high risk; then products subject to certain transparency obligations; and then minimal or no risk, for which there are no requirements whatsoever.
In a way, this reminds me of the system in effect in the EU for characterizing devices in terms of their energy requirements. Again, you have a scale, and to the non-expert, you could say that it helps get a feeling of where a product fits in this scale.
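For illustration only, the tiered structure described here could be sketched roughly as follows. The tier names and obligation lists are a hypothetical simplification for this transcript, not the Act's legal text.

```python
# A rough, hypothetical sketch of the AI Act's risk tiers as described above.
# Tier names and obligation lists are simplified for illustration only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright, with narrow exceptions
    HIGH = "high"                  # strict obligations before and after market entry
    LIMITED = "limited"            # transparency obligations (e.g. disclosure)
    MINIMAL = "minimal"            # no additional requirements

# Simplified mapping from tier to obligations (illustrative, not legal text).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be deployed in the EU"],
    RiskTier.HIGH: ["risk management", "registration", "conformity assessment"],
    RiskTier.LIMITED: ["disclose that users are interacting with an AI system"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```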
Would you say this is an apt analogy?
And do you think this approach makes sense for non-experts?
I'll let Max take this one.
It's a good question. And the analogizing part is always an interesting way of thinking about it.
I think what's really important here is that the risk-based approach is really trying to minimize the impact of the regulation
and all the obligations that come with it
on those people or organizations that develop
and deploy AI systems that are of little to no concern.
So they really wanna focus most or all of their attention
on the bits where it gets tricky
and where risk is introduced to people's safety, to
people's rights, to people's privacy, and so on.
And I think that that's also the part that we want to focus on, because in the end, regulation is not an end in and of itself. So what we want to accomplish with our recommendations and our advocacy work around this is that the parts of the regulation that focus on how to mitigate or prevent risks from materializing are strengthened in the final act. And I think there are a lot of analogies to be drawn to other risk-based approaches that we see in European law and regulation elsewhere. But in the end, it's also important to look at the risks that are very specific to this specific case, which is basically answering the question: how can we make sure that AI is, a, trustworthy, but, b, also developed and deployed with the care and due diligence that needs to go into this process, to make sure that no one is harmed and that, at the bottom line, this is a net benefit.
Okay.
So coming back to the lifecycle of this act,
if I recall correctly, right now we're at the stage where the initial draft was published,
then members of the committee that were assigned
to look into that gave their feedback.
And this is also the stage at which external actors, Mozilla included, give their feedback.
So I saw that there was a blog post that you published recently
that included the core of your recommendations for improving the AI Act.
And that was focused around three points,
ensuring accountability, creating systemic transparency,
and giving individuals and communities a stronger voice.
Would you like to quickly summarize those points?
Maybe Max, you can summarize the points
and then I might add a little bit about how we think about taking those forward and influencing the legislation.
Yeah, sure. So, I mean, you correctly gave the headlines of our recommendations. And by accountability, what we really mean is that it's important to figure out who should be responsible for what along the AI
supply chain, right?
Because risks should be addressed where they come up.
So whether that's in the technical design stage or in the deployment stage.
So it's really important to make sure that the organizations or the people who are involved
at each stage in the process have to sort of perform
the due diligence that actually fits those steps.
The second part, systemic transparency,
I think is really important.
because I think user-facing transparency is important too, but that doesn't really cut it, right? It's nice if an end user knows when they're interacting with an AI system,
but what we also need at a higher level is for journalists and researchers
and also regulators to be able to scrutinize the AI systems that are on the market, that are put to use, and how these are affecting people and communities on the ground.
So we have to make sure that the AI Act in its final form comprises mechanisms enabling this type of scrutiny.
And one thing the AI Act proposes is a public database where people or organizations would have to register high-risk AI systems. I think that's really great; it can be a really important mechanism. But what it sort of excludes right now is those deploying AI: those who would have to register the systems would be the developers. But like I said, risk also really depends on the exact context in which a system is deployed, on the intended purpose, and sort of the organizational environment. So it's also really important that deployers, those putting AI systems to use, have to disclose some information as to how they use these systems.
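As a rough illustration of the kind of record such a database might hold, here is a sketch. The field names, and the separate deployer section, are hypothetical, reflecting the recommendation just described rather than any schema in the draft Act.

```python
# A hypothetical sketch of a registry entry for a high-risk AI system.
# All field names are illustrative; the deployer section reflects Mozilla's
# recommendation, not the draft Act's actual schema.
from dataclasses import dataclass, field

@dataclass
class DeveloperRecord:
    provider: str              # organization placing the system on the market
    intended_purpose: str      # what the system is designed to do
    conformity_assessed: bool  # whether a conformity assessment was performed

@dataclass
class DeployerRecord:
    deployer: str              # organization putting the system to use
    context_of_use: str        # the concrete setting the system is used in

@dataclass
class RegistryEntry:
    system_name: str
    developer: DeveloperRecord
    deployers: list[DeployerRecord] = field(default_factory=list)

# Example: a developer registers a system; a deployer later adds usage context.
entry = RegistryEntry(
    system_name="ExampleScreeningSystem",
    developer=DeveloperRecord("ExampleVendor", "CV ranking", True),
)
entry.deployers.append(DeployerRecord("ExampleEmployer", "hiring for retail roles"))
```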
And the third point, which is about empowering individuals and communities: one thing that's pretty much missing from the original draft is the people who are affected by high-risk AI systems, who would end up having to deal with the consequences when something goes wrong. So what we're asking for is that the AI Act ultimately includes a sort of bottom-up oversight mechanism that equips people with a way to contest decisions, or to seek redress, or complain to authorities.
That's the rough summary of our three main recommendations, but I'll pass it over to Mark to fill in the gaps I left.
No gaps. But the thing I would say is if you think about these broad recommendations, accountability,
transparency, making sure that there's a kind of public and community voice, the regulation,
as in any case with regulation of a complex ecosystem like we're talking about with the
internet and with AI, can only really provide the guardrails and the kind of framework for action.
And it's really important that, at this stage, different parties who are actually going to have to live in this framework get a chance to weigh in on what it looks like practically to design accountability, and what it looks like practically to develop accountability and citizen feedback.
And so we really see our role in a process like this as helping to drive some of that conversation. We're really connected to a part of the public, including the European public, that really cares about these issues.
How do we actually get them involved in shaping this and having a voice going forward? We're also
very involved in the tech industry. What are the technical norms
in terms of building accountability into systems and understanding how developers support
suppliers to be more trustworthy or how deployers actually educate themselves
to be more trustworthy, and also to build transparency into systems. So, you know, this is not purely a matter of influencing what's in the legislation. That's certainly a part of the focus of the recommendations. But this is also the time in the lifecycle of an act like this to be working with all the players to figure out, practically, how this is actually going to play out in real time. And I don't think that was really done enough with the GDPR, and that's why our stance really is as much that of a facilitator and a convener around topics like those in the recommendations as it is pushing for particular language.
Yeah. Well, first, as a kind of overall comment, I'll say your recommendations seem to be in the right direction.
They make sense and I think they really add to what was already there. And actually the question
that I had lined up for you was precisely around what you just touched upon. So first, what's the
process from this point on? And well, is there some sort of institutional, let's say, process through which feedback such as yours is taken into account?
And is there, I don't know, some kind of guarantee, let's say, that, well, yes, this sounds like a good idea.
Therefore, it will be incorporated in the final text. And the second part, which is perhaps even more vague, is how does, I don't
know, the average individual get involved in this thing? Can they, for example, get behind
your recommendations? Or is there any other way through which they can submit their own ideas or evaluate existing ideas?
Well, Max and I may have different angles on this,
so maybe we'll both take the question.
I would say the way to get involved or the step we're at is really the normal democratic process.
I mean, you have, you know, elected officials looking at these questions.
You also have people inside the public service
and the EU asking these questions.
And then you have industry and the public
having a debate about these questions.
And I don't think there's a particular mechanism.
Certainly people like us are going to weigh in
with specific recommendations.
And by weighing in with us, you kind of help amplify those.
But I just do think that the open democratic conversation, being in public,
allying yourself and connecting to people whose ideas you agree with,
wrestling with and surfacing the hard topics in public,
that's what's going to make a difference.
And that's certainly where we're focused.
Max, I don't know if you want to say anything more specific or technical about the process.
I mean, I think what you said is right.
At this point, what it's really about is swaying public opinion and swaying the opinion of
people who are in the position to make decisions
and to engage, which means parliamentarians, member state officials, officials within the
European Commission.
So like Mark said, it's about leading a broader debate with everyone involved and making sure
you have the right arguments to convince people.
And then, I mean, at a more grassroots level, what an individual person can do, it's the same as
always, I would say. I mean, if you're in the EU, you can write your local MEP. You can be
active on social media and try to amplify voices you agree with, putting forward good ideas.
You can sign petitions. There's all sorts of ways of being a vocal, engaged citizen, I guess.
And that's no different here.
Okay, I see.
All right, so then let's come full circle in a way to where we started the conversation.
So the background of your involvement.
I found out, I admit I didn't know prior to this opportunity, that there seems to be a framework of sorts that you have developed to guide your approach to AI.
And this also seems to be guiding your specific engagement with this AI Act.
So would you like to explain a little bit the process through which you arrived at this
framework?
And well, how do you see this particular recommendation that you're putting forward
in the context of your overall approach?
We've been thinking about this topic of trustworthy AI for a number of years now, and really, in many ways, on almost a parallel schedule to the EU, where they had the High-Level Expert Group, and considered the topics, and listened to people, and then came out with the Act. Three or four years back, we went and looked at the questions that are defining the health of the internet and where the internet is going. And the question of trustworthy AI came up as a really top focus, and it's why we put our energy behind that, and we have worked with and built a community around the world of people who care about how AI impacts society. And in that... sorry, I lost my train of thought, because I'm a little bit worried about time. I'll just say that.
No rush.
So we've worked with people around the world to kind of map out a theory of how we can take AI and make it more trustworthy, and we've really focused on two broad topics, with four places where we think we can push. The first of the two broad topics is making sure that you design AI systems and AI-driven business models with human agency in mind, so people can make choices.
And that's not always the case. You think about, you know, content recommendation engines from
YouTube or Facebook, it's actually not about us making choices, it's about choices being made for us. And so I think one of the things is how do we include human agency in
the design of AI systems? And then the second point is how do you focus on accountability?
If there are harms that come from AI, as is really the focus of the Act, how do you make sure there really are consequences? That, of course, is going to drive different norms and different behavior in the design of systems, if people think, oh, you know, what I'm designing is going to have an impact, unfairly, on whether a person gets a job or not, or has an impact on democracy and whether the information ecosystem that we're in is fair and truthful. So these questions of agency and
accountability are really our focus. And we think that the act is a really good backdrop,
one that can have global ripple effects to push things in the right direction on these topics.
And in that, there's four things that we think can make a difference. One, pushing the norms
in industry. Who are the people who build things?
How are they built?
And then pushing what gets built.
Do we actually have technology that keeps people in mind
and that has accountability and transparency designed in?
Third, do people start to demand more accountable AI systems, or just more accountable tech products, which all today include AI?
And then the last point, which is the supporting one, but in our view not the core one, is what regulations and what legal environment support trustworthy AI. But really, we see that as something that enables the other things; without people building technology in a different way, and people wanting to use that technology, the law, you know, is just a piece of paper.
It is a piece of paper, people might argue, but one that can help drive things forward. And again, pointing to the previous example of the GDPR, it did push things forward, not just in terms of buzz and hype, but also in real terms: the creation of new jobs, for example, and new roles, and also pushing the software development process forward in terms of having to comply with certain requirements, and even creating altogether new classes of software products. So you could argue that you can expect similar things to happen with this type of regulation as well.
I think that's the hope.
And at the same time with GDPR, sometimes you've gotten really interesting new companies and new software products that keep privacy in mind.
I mean, I could list a number of them. And sometimes you've just gotten annoying pop-up reminders about, you know, your data being collected and cookies. And so making sure that a law like this drives real change and real value for people is a tricky matter, and it's why, right now, the focus should be on the practical things that industry and developers and deployers can do to make AI more trustworthy, and on making sure that the regulations actually reflect and incentivize that kind of action, and not just kind of sit up in the clouds.
Yeah, well, like you said,
it's a long and winding process.
And I think in many ways, this specific AI Act
is a trailblazer worldwide.
And even so, we still have a few years ahead of us before we can see it in full effect. So yeah, there's still a long way to go, but
hopefully that should set a good example. Yeah, let's hope. Let's hope. And we're certainly there to be a partner and a support for that happening.
Great. So thanks. I think we did more or less manage to keep it short, so I think I'm good, unless there are any other closing thoughts that you'd like to share.
Feel free.
No, that's great, George.
I really appreciate you inviting us.
Yeah, thank you very much.
I hope you enjoyed the podcast.
If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and Facebook.