Orchestrate all the Things - The State of AI Ethics in 2022: From principles to tools via regulation. Featuring Montreal AI Ethics Institute Founder / Principal Researcher Abhishek Gupta
Episode Date: January 31, 2022
What do we talk about, when we talk about AI ethics? Just like AI itself, definitions for AI ethics seem to abound. From algorithmic and dataset bias, to the use of AI in asymmetrical or unlawful ways, to privacy and environmental impact. All of that, and more, could potentially fall under the AI ethics umbrella. We try to navigate this domain, and lay out some concrete definitions and actions that could help move the domain forward.
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis and we'll be connecting the dots together.
What do we talk about when we talk about AI ethics?
Just like AI itself, definitions for AI ethics seem to abound.
From algorithmic and dataset bias, to the use of AI in asymmetrical and unlawful ways,
to privacy and environmental impact.
All of that and more could potentially fall under the AI ethics umbrella.
We try to navigate this domain and lay out some concrete definitions and actions that
could help move the domain forward.
I have a few questions I'd like to cover, and I have to say that, well, I knew already, because I had checked some previous versions of the reports that you published, that there's a lot of material there.
And yeah, I really wonder how you guys managed to cover all of that, actually, you know, read all those papers and do all that, also considering that there are people, you know, listening to the podcast who have no idea who you are and the work that you do.
That's why I thought it's a good thing to start with a little bit of background about yourself and what you do and how this institute was founded, what it's supposed to do and, you know, just background information.
Yeah, yeah, absolutely. So, yeah, first off, you know, a big thank you to you for inviting me onto your podcast
as well.
So I'm Abhishek Gupta, I'm the founder and principal researcher at the Montreal AI Ethics
Institute.
The Montreal AI Ethics Institute is an international nonprofit research institute with a mission to democratize AI ethics literacy.
We do that through various initiatives that center on community building, on education and doing fundamental research.
My background is I graduated from McGill University in Montreal with a degree in computer science.
And so that's, you know, that's my background coming from a technical space.
But over the years, as I've been working in this space, I've been upskilling myself in the social sciences as well,
because I think the right approach to the space of AI ethics
is an interdisciplinary one. And, you know, I was lucky to be involved with people who
were, you know, generous with their feedback in, you know, telling me what are the areas in the
social sciences that I need to cover and dive into so that I could supplement my technical background.
And that sort of helped me round out, you know, my sort of academic and, you know, practical background in terms of the domain. I also work as a machine learning engineer at Microsoft in a team
that's called Commercial Software Engineering. So we are a special division within Microsoft
that gets called upon to solve
the toughest technical challenges
for Microsoft's biggest customers.
What that means practically,
at least for my work in the domain of AI ethics
is that I have hands-on applied experience
with building machine learning systems.
So a lot of the things that we talk about,
a lot of the issues that we talk about,
they are quite concrete to me.
And so are the solutions as we think about them.
So it's not so much only thinking about these ideas
in the abstract, but also thinking about,
well, how do we put these principles into practice?
A little bit more about the Institute itself. The genesis of
the Institute was as an initiative for lowering barriers and increasing inclusivity for people
from all walks of life to participate in the discussions around AI ethics. Back when we
started, and it might seem surprising now that AI ethics has become so mainstream, these discussions were being had only in small pockets around the world.
They were quite fragmented. And the degree to which people were able to participate was quite limited, both because
of self-erected barriers, but also because of barriers erected by others.
Self-erected barriers were perhaps a lack of confidence in one's own abilities, be that
thinking that you need a PhD in AI to be able to,
you know, comprehend and participate in these discussions.
And the barriers erected by others were, you know,
stemming from a place where people thought that, well,
you know, you need to come from a certain background
or to have, you know, certain credentials
before you should be allowed, you know,
in the room or at the table,
to talk about these issues. And so the genesis of the Institute was, well, how do we create a space
so that we can invite meaningful discussion? And I emphasize the word meaningful here because
it's important that the discussions come from a place of informed knowledge
and come from a place where we understand the nuances
of these issues.
It's not enough to say, hey, do no harm.
Yes, that's a great idea, but what does that mean?
Do no harm to whom?
What does harm mean?
And who are the arbiters of whether we're doing harm or not? And all of these
things, you know, are not new ideas. They've been debated upon and thought about in other domains
as well, which is why I think, you know, I go back to what I started with, which is that an
interdisciplinary education, or at least, you know, being open and listening to people from
all walks of life is important because they've perhaps already grappled with these issues and
thought about these issues in depth before. So learning from them, building on those ideas, I think, is the way to go.
Yeah, I do agree that having an interdisciplinary approach is the right way to go, especially for something as wide, let's say, and also as deep as this topic,
AI ethics. And that also shows, I should say, in the way that you structure your report.
Actually, I like the fact that it's structured precisely like that.
So both wide and deep.
So you cover some research in depth and then you also cover some new developments, let's
say, trying to cast a wide net into figuring out what's going on in the field. So besides yourself,
I know that there are also a few other people involved in the Institute working there. And I was wondering a couple of things. First, if they also have
different backgrounds, interdisciplinary backgrounds. And the second thing I was wondering is
whether this is a full-time job, let's say, for you or for the others as well, or this is a passion project, basically? Yeah, those are good questions. So we've always operated in a manner
where we have the potential to invite people to work with us, either in a full-time or in a
part-time capacity. At the moment, our current staff makeup consists of everyone volunteering their time in addition to their full-time roles,
including myself. And each of them comes from a very different background, which I think
is what allows us to cover these topics in their breadth and their depth. So, you know, one of our staff members is a professor at Union College. And so she brings,
you know, a wealth of experience, especially around pedagogy in AI ethics. So how do we
actually teach AI ethics effectively, especially as, you know, there's been rising interest,
of course, in the domain. A lot of universities, a lot of even corporate organizations are creating training programs
towards equipping their employees in the case of companies, but in the case of students,
equipping them with necessary knowledge and skills in this space.
How do we do this effectively?
So she brings that
wealth of experience. We've got one of our staff members, she comes from a background in business.
And what that helps us do is also articulate the business case, really, for AI ethics. And this, perhaps, you know, is becoming more commonplace now
as we're seeing many more people
make the case that actually
following the path of AI ethics
makes you, you know,
more competitive in the marketplace,
especially as consumers
are becoming savvier that,
you know, products that have a responsible lineage,
a responsible supply chain, are not only good for the planet, but also often lead to a better
product or service. Another one of our staff members comes from a background in philosophy. So a deep understanding also of areas like Ubuntu ethics and non-Western
ideas of philosophy, which I think is, again, a testament to the fact that we shouldn't just be
looking at Western ideologies when we're trying to formulate these principles and practices, because if we solely look at one way of thinking,
we're repeating patterns of colonialism, of tech solutionism, and exporting, you know,
one way of thinking from a very, very small part of the world to the rest of it, which,
you know, obviously doesn't serve the needs of those communities.
And finally, my co-founder comes from a background in entrepreneurship,
which I think is a bit of a differentiator for us in the sense that we like to operate quite nimbly.
In fact, looking at the sort of resources, the staff, the time that we have, we're able to
stand in league with some of the more well-funded, bigger institutions out there as well in terms of the output that we're able to produce and the impact that we're able to
create. That comes from our entrepreneurial mindset,
really thinking about, well, how do we best serve the community
with the limited resources that we have?
So it's a good mix of backgrounds that I think we haven't really seen
elsewhere in similar institutions, and we're proud of that.
Yeah, I agree.
Having people that have different backgrounds really does help in being able to see problems from many different angles.
And especially the fact that you have someone with an entrepreneurial background, I think, is particularly applicable to what you do, because, well, on many occasions there's a conflict, let's say, in entrepreneurship, when people are trying to get something to market fast or trying to have maximum impact, and then, you know, there are the ethical considerations, and there can be a tension there. So it's good to have someone on board who has been through that, I guess. And you also spoke about the impact that you are trying to accomplish through the Institute.
And I already know of one of the ways that you are trying to do that, which is actually the occasion why we're having this conversation.
So you publish periodically, I think twice per year, if I'm not mistaken, these reports, which again are quite comprehensive, I would say.
They cover lots of ground, many different areas as well.
What other activities is the Institute involved in?
So how do you try to disseminate, first, the research that you do, that you publish in this report? And then, are you also in touch with standards bodies or other organizations, or involved in other activities? Yeah, so we actually have a very
wide range of activities that we, you know, either host ourselves or contribute to.
So let me start with some of the fundamental research that we do. So we've got a network of researchers, of collaborators of the Institute who embark on typically under-explored and under-examined areas.
To give you one example, the environmental impacts of AI, of course, toward the end of 2020, with Dr. Gebru's firing from Google, gained a lot of attention, but we had started working on that area much earlier.
In the early parts of 2020, we had already started to investigate the environmental impacts of AI.
In fact, we put out a publication, presented it at several conferences as well. And that was a collaboration between
a researcher who is at Concordia, another university in Montreal, myself and someone
from CMU. And, you know, we spent quite a bit of time, you know, understanding, well,
what are the environmental impacts of AI systems first and
foremost, but also what does it take to trigger behavior change in the practices of developers,
designers, the various stakeholders involved in the AI lifecycle, so that we can actually mitigate
those environmental impacts of AI. Other areas that we've invested efforts in are looking at how you
integrate effectively these technical measures in the AI lifecycle. So taking an MLOps approach,
which is how large AI systems are managed in production and, you know, all the big organizations that build and deploy
these systems as a part of products and services. And finally, you know, looking at other areas like
what are the research trends and what are the gaps in those research trends that need to be filled
when it comes to AI ethics. So doing quantitative analysis, looking at power dynamics in the domain of AI ethics. In fact, we have a
publication with a co-author of mine under review at FAccT 2022 that looks exactly at these power
dynamics, specifically in the context of auditing processes and requirements in the domain of AI ethics.
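To make the environmental impact thread above a bit more concrete, here is a rough, back-of-the-envelope sketch of how a practitioner might estimate a training run's carbon footprint from hardware power draw, training time, and the carbon intensity of the local grid. Every number and the helper function below are illustrative assumptions, not the methodology of the paper mentioned above; open source trackers such as CodeCarbon automate this kind of measurement.

```python
# Illustrative back-of-the-envelope estimate of a training run's carbon footprint.
# All numbers below are made-up assumptions for the sake of the example; tools such
# as CodeCarbon, or cloud provider dashboards, can measure this directly instead.

def training_emissions_kg(gpu_count: int,
                          gpu_power_watts: float,
                          hours: float,
                          pue: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Estimate kilograms of CO2-equivalent emitted by a training run.

    energy (kWh) = GPUs x power (kW) x hours x PUE (datacenter overhead)
    emissions    = energy x carbon intensity of the local grid
    """
    energy_kwh = gpu_count * (gpu_power_watts / 1000.0) * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh


if __name__ == "__main__":
    # Hypothetical job: 8 GPUs drawing ~300 W each for 72 hours, with a typical
    # datacenter overhead (PUE ~1.5), on a grid emitting ~0.4 kg CO2e per kWh.
    baseline = training_emissions_kg(8, 300, 72, 1.5, 0.4)
    # The same job scheduled in a low-carbon region (~0.05 kg CO2e per kWh):
    # region choice is one of the concrete levers practitioners can actually pull.
    low_carbon = training_emissions_kg(8, 300, 72, 1.5, 0.05)
    print(f"Baseline grid:     {baseline:.1f} kg CO2e")
    print(f"Low-carbon region: {low_carbon:.1f} kg CO2e")
```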
And the goal with all of this fundamental research
is to elevate the level of conversation
that we have around this subject in our communities
in this domain.
And we do that through disseminating it through the State of AI Ethics reports, as you mentioned, but we also do that through our weekly newsletter that we publish
called the AI Ethics Brief, which captures, again, a lot of these insights and shares them out almost in real time, you could say, in a sense, by publishing on a weekly cadence. But we also host sessions like public workshops
that invite people from all walks of life to come in, have an hour and a half, two hours with us,
our staff, and deep dive into these areas where our format is a little different in the sense that it's not a one-way lecture or a one-way presentation.
What we seek to do is to invite people in and really give them also the opportunity to chime in with their ideas.
Because it goes back to what I was saying earlier, which is that people from all walks of life,
you know, have something to contribute
given their varied backgrounds
and it behooves us to listen to them
and incorporate those ideas into our practice,
into our understanding of the field.
And so our sessions are structured,
our public workshops are structured
so that we do that work by inviting these people,
listening to them, giving them an opportunity to share what their aspirations and concerns
are about the field.
And finally, we work with a whole host of government bodies, public entities, multilateral
organizations.
For example, we worked with the Joint Artificial Intelligence Center at the Department of Defense in the United States.
We worked with the Office of the Privacy Commissioner of Canada.
We've worked with IEEE.
I chair the standards working group at the Green Software Foundation.
We've contributed to publication norms for responsible AI work at the Partnership on AI.
We've made contributions to the Scottish National AI
Strategy, we've worked with the Prime Minister's Office
in New Zealand, and we've done work with UNIDIR, so the United Nations Institute for Disarmament Research,
in assessing the AI ethics aspects
of autonomous weapon systems as well.
So we're lucky that these public entities, these government bodies,
multilateral organizations trust us enough to bring us into their processes,
into their thinking, and allow us to make contributions there.
One of the most exciting
contributions that we've made so far that I'm really looking forward to this year is the
Algorithmic Accountability Act 2022 that's going to be put up for discussion in the US. So this is an effort that's being led by Senator Wyden
in the US.
And we've been involved in that process
since the very early days; we've reviewed multiple drafts and provided comments.
In fact, we're happy to see some of the suggestions that we've made show up almost verbatim in the text of that
Algorithmic Accountability Act as well. So yeah, those are a few ways that we
help to contribute to this very wide ranging domain of AI ethics.
Yeah, indeed. And I wonder when do you ever get to sleep?
I mean, having day jobs and doing this work,
but let's stick on the wide part
that you concluded with,
because actually that's the next thing
that I would like to touch on with you.
So indeed, it is a very wide domain.
For one, I think AI ethics is a term that potentially means
many different things to many different people. And I was trying, you know, as a kind of mental
exercise, let's say, to make a list of the different areas that this term invokes for me.
So I could think of things like bias, algorithmic bias, like the one that you just mentioned that this act is meant to counter.
I could think of the use of AI in asymmetrical or potentially unlawful ways, such as military use or facial recognition,
or the fact that many of these otherwise very lucrative AI models rely on very low-paid work,
or data privacy and surveillance in general.
And there's also the environmental impact that you touched upon
and has gotten quite a high profile in the last couple of years.
And I think that's a very good thing, actually.
And there's also national and international policy.
I know that in the reports, you also touch upon privacy.
And to me, that's a broader issue, not necessarily strictly related with AI.
There is some overlap, but I wonder whether it strictly falls under AI ethics.
But that's as far as I go, let's say. Do you have, like, a more concrete, let's say, definition of what AI ethics is, what it applies to?
Yeah, and that's been one of the challenges, I think, as well. Right. And we sort of touch on that in the report too, where just the sheer number of
sets of principles and guidelines that are out there that each try to slice this area or segment
or categorize this area into subdomains is sometimes overlapping, sometimes not, sometimes
terms are used differently. For me, I think, you know, I strive to put it under
four broad buckets, at least as a mental model, which is in terms of the privacy and security
implications, in terms of reliability and safety, then in terms of fairness and inclusiveness, and
in terms of transparency and accountability. And these are, I think, sort of four core pillars for me that,
you know, shape and constitute the entire domain of AI ethics. And of course, you know, the meanings and context for each of these terms are always changing. Also, you know, new areas
come up now and then. But at least my thinking at the moment centers on these four pillars.
I think you also make an interesting point at the same time about privacy being a much larger issue than just being a sub-area of AI ethics.
And that, I think, raises for me two things. One, framing a lot of these problems as sub-areas of AI ethics is potentially problematic, because we, you know, we might be missing the bigger picture, which is that the harms that arise from the use of AI are not purely because of the use of AI, but they're also
related to the associated changes that happen in the software infrastructure, for example,
that surrounds an AI component. It's never, you know, that you expose a raw AI model to a consumer
or a user, right? I mean, it's wrapped up in some sort of functionality in a product or a service of some kind, right? So there is an associated software infrastructure around
it, a product, a service, and that has included in it many design choices, many ways of how it
has been conceived, how it is maintained, how it is deployed, how it is used, et cetera.
But also the impact that all of these products and services have on the ecosystem that surrounds
them, right?
And the ecosystem really being the socio-technical ecosystem, right?
So how does it reshape relationships between humans, et cetera?
So if we frame, you know, all of these, you know, sort of subdomains as one part of this bigger,
broader AI ethics umbrella, sometimes I feel that maybe we're limiting discussions a little bit,
just, you know, to a particular lens. But when you look at things like privacy, on the other hand, I think the introduction of AI into a lot of products and services has had implications that exacerbate privacy problems compared to what we were capable of doing before, and capable in a negative sense, in the sense of intruding on people's privacy. Compared to what we were able to do before, now, powered by AI capabilities, we're able to, you know, almost shine a light on every nook and cranny, every crevice, sometimes even crevices that we didn't even think to look for. We're able to shed this, you know, stadium-sized giant spotlight, which really exposes, I think, all aspects of our lives, unfortunately, ones that we, you know, perhaps would like to keep private. And so I think, when we talk about, okay, well, what does AI ethics mean? A part of it is, you know, we have to offer some definition. Otherwise, you know, how do we talk about something if we
haven't really defined it? But also, I think it should come with a little, you know, asterisk
and say, hey, this is just one way of framing and discussing it. There are many others. So, you know, I think keeping an
open mind as we discuss this, you know, very, very broad subject of AI ethics is how I think about it.
Yeah, you're right about the disclaimer. To me, it was also good to try and get a feeling of,
you know, what the conversations are that are being had at the moment.
And getting a read through your report helped me do that.
And after doing that, to the extent that I managed to do it at least, I still get the impression that it's very much still under formation, both the definition and the discipline itself.
So there are many different voices and many different people pulling in different directions, let's say.
And well, let's actually get to that part.
So the new report and what's included in it.
And I know we don't have a whole lot of time to cover it all.
Even if we did, it would be impossible.
So I just tried to pick a few topics that caught my eye.
And of course, feel free to also identify topics
that you feel are worth highlighting through it.
And I would start with actually something
that I think pretty much resonates
with what you have said so far.
So I saw some research on the proliferation of AI ethics principles, and in that, the authors try to shed some light on what are, or what should be, the guiding principles for AI ethics. For me, the takeaway from that is that, well, there are a lot of things to consider and many different views, basically.
So I don't think the domain has crystallized on a set of guiding principles yet.
Would you like to comment on that?
Yeah, yeah, absolutely. So I think one of the takeaways for me, similar to you, is that we haven't
crystallized on a set of unifying principles, as it's referred to in that particular piece.
And it almost makes me wonder, over the past maybe two to three years, I think the piece itself covers, you know, publications, papers dating back to 2018. So I guess, you know, three, three and a half years at this point.
And the fact that we haven't been able to converge
on one unifying set of principles,
maybe tells us something that perhaps
that is the wrong North Star to go towards,
or, you know, to the extent that it maybe shouldn't be a North Star at all. And what I mean by that is often if we try to look for
the broadest set of unifying principles, it necessitates that we become more and more abstract. And what I've found,
at least as a practitioner, is the more abstract we become, yes, it is useful as a point of
discussion and framing conversations at the broadest level and perhaps guiding research.
When it comes to practical implementation, we need a little bit more
concreteness. Staying very, very abstract and very high level makes it incredibly difficult for
someone who, you know, let's get very practical here, right? So if I'm working on a project, and, you know, it's a three-month project, and we've been working on, you know, issues of bias, let's say, in the system, and we've put in place some practices, et cetera, to mitigate bias, and we're nearing the end of that project deadline. If I have only abstract ideas and abstract principles to guide me, project pressures, timelines, deliverables
will make it incredibly difficult to put any of those ideas into practice because
that is not the time to distill those ideas and come up with concrete practices that
operationalize those principles. So I think the fact that we've spent so many years now
trying to come up with a unifying set of principles
and haven't really succeeded is testament to the fact
that maybe that is the wrong thing to be looking at,
that maybe we should be considering looking at principles
that are more catered to the domain, to the context,
and work specifically towards creating things
that are more concrete manifestations
that actually help guide the actions of practitioners.
Because ultimately, the point of discussing this domain of AI ethics and working in it is to steer the actions and change the behavior of the various stakeholders in the AI lifecycle, not just to do a thought exercise.
Of course, it is useful.
It helps us articulate what we as humans value and what we would like society to be.
But that's not the be-all and end-all.
What we really want is to create change, right?
And to create change, folks who don't spend all their time thinking about these issues
in the abstract need some concrete guidance.
And so I think with the proliferation of these AI ethics principles, I think perhaps a signal
for us from that is that we should be moving towards creating more concrete manifestations of actions
that people can take in their daily practices, so that they can realize those principles in practice.
Yeah, yeah, indeed. I can totally sympathize, you know, putting myself in the shoes of somebody who, as you said, is working on a project with very specific deadlines.
And giving that person abstract guidelines is fine, but it's not necessarily going to help them
do the right thing in their specific project. So on the other end of the spectrum, the more
practical end of the spectrum, there are also some tools, apparently. And this is something that I
was wondering myself, actually, before reading this particular issue of the report, whether there are
any tools around, and if so, what these tools could potentially be used for. But actually,
bridging the gap, let's say, between the abstract guidelines and the very concrete tools that people may use to accomplish certain
tasks. Do you think that there is something in the middle that could actually nudge, let's say,
people to use those tools? And I'm referring specifically to regulation, having in mind the example of GDPR. So I think, if you think about it, GDPR, which is more specific to the privacy and data management part, also maybe started a little bit similarly. So from a set of guiding
principles, which were then expressed, let's say, as regulation or specific things that need to be
done by organizations. And then eventually tools also came in the market that actually help
organizations implement those policies.
Do you think that could be a way for AI ethics as well?
Yeah, it's great for two reasons, right?
One, I think, you know, as we were just talking about it,
it helps us move one step closer to becoming more concrete.
It also, I think, helps steer the actions of people in a more defined fashion.
And in particular, I think what regulation does is it accelerates the adoption of these tools,
of these ideas, of these practices.
I mean, you know, look back again to 2018, right, May 25th, when GDPR came into effect.
What was interesting was that in the 12 months leading up to that date, a lot of companies started, you know, scrambling towards making sure that they were GDPR compliant so that they wouldn't have to pay fines if they were in violation. And what that meant was that product teams quickly altered their roadmaps so that privacy became front and center, so that the mechanisms, for example, for reporting a privacy breach to a data protection officer, those mechanisms were put in place. And those
were things that, you know, if you look back, the ideas that were proposed as a part of GDPR weren't,
you know, groundbreaking in terms of being very novel. Like those ideas have floated around
for, you know, for more than a decade or perhaps two decades, right? In terms of,
well, these are the things that we should do if we want to uphold the principles of privacy, right? But what the GDPR did was that it put a timeline, a firm timeline, in place, with concrete fines and concrete actions that were required of firms. So I think regulations are a great step in that direction, in that I think it helps, you know, it creates a forcing function
that, you know, accelerates the adoption of these ideas. And also the other, you know, benefit is
that the actors who wouldn't do this normally on their own, because they're invested in, you know,
other areas of, you know, whatever product or service development, now have no choice.
They have to invest efforts into being, for example, privacy-centric in the case of GDPR
compliance, but in other cases, for example, with the Algorithmic Accountability Act,
really putting front and center some of these societal impacts of AI
as a core consideration.
So regulation does then two things, right?
It creates an accelerated timeline
for the adoption of these principles into practice.
And the other is that it also steers the entire ecosystem
of organizations that are building these products
and services to actually take these ideas and put
them into practice, which is maybe something that not all of them would have done if there
was no forcing function for them. Yeah. So what's your take on the current regulation landscape, let's say? The EU seems to sort of be leading the way again, having
introduced new regulation in the last year. You just mentioned that the US is taking some steps
as well. China also has issued some regulation guidelines recently. So what's your take on those? Do you think that they reflect on,
well, not just the philosophy, let's say, of the different nations that are behind those
regulations, but also the politics and potentially geostrategic interests, for example? Yeah, that's a great question.
I mean, there's a lot to be said there, right,
in terms of the state of regulations
that are being proposed all around the world.
So, of course, the EU, I think, is, you know, a leading light in this space, you know, as they've sort of trailblazed their way first with, you know,
GDPR, but also other, you know, regulations that have come and gone in the EU, you know,
it's commonly framed as the Brussels effect, right? So things that get developed in the EU
kind of set the gold standard in a sense for the rest of the world. And I have thoughts also in terms of
whether that is the right approach of taking a single gold standard. And maybe I'll just expand
on that right away and then come back to the rest of the state of regulations across the world.
So let's take the idea of, or let's take the GDPR as an
example, right? So when we're talking about the Brussels effect and the spread of the ideas from
GDPR, yes, they are great ideas, right? But let's not forget that they inherently enshrine ideas of privacy that are Eurocentric, right? Of course, because it's been
developed and, you know, deployed in the EU. But what it I think sometimes misses is cultural
nuance and context, as the GDPR is taken as a model, and it almost becomes a sort of restraint or a constraint in terms of how privacy policies are being formulated in other parts of the world.
So you have the PDP bill in India, for example, you have some privacy legislation coming up in Vietnam.
In fact, you know, collaborators of mine, one from the City University of New York, who, you know, is from Vietnam,
she, myself, someone from the Vidhi Center for Legal Policy in India, and another one of our
collaborators from Vietnam, we're all working together on analyzing this subtle yet pernicious
effect of how we tend to sort of fixate on GDPR as this gold standard and export that to all other parts of the world, where the policy-making process gets constrained artificially because people try to, again, emulate just that and forget about what privacy might mean if sometimes it means something different within a local cultural context. And making sure that, for example, in the Indian context, we are sensitive to what privacy means to Indians, and how they would like to be able to protect their privacy rights based on their own privacy definitions, is something that warrants
consideration. I'm not saying that it is, you know, phenomenally different. Maybe it does
actually end up converging to the same place that the GDPR has. But the fact that we don't
invest efforts in understanding what privacy means locally, what the cultural context and nuance is,
is, I think, the pernicious part of this Brussels effect, this gold standard setting.
So that's, I think, you know, sort of one aspect of the state of regulations today.
The other aspect being that I think as regulations are being formulated in different parts of the world,
so you mentioned, of course, for example, you've got the EU AI Act from the EU. You've got some things coming out of the US as well. China has got now things that it is working on.
What it might do is create an even more fragmented ecosystem, placing higher burdens on organizations that strive to achieve compliance with each of these regulations. It might actually disproportionately favor a small set of
organizations who actually have the resources to invest in becoming compliant to each of these
regulatory areas. And then, you know, at least globally, they'd be able to grab a larger share,
a larger market share compared to resource-constrained organizations who maybe just don't have enough
resources to be able to become compliant all across the world.
How do we bridge that?
I don't really have answers just yet, but I think it's something that we need to be
thinking about, which is, well, what are the market effects of having different sets of
regulations that impose different sets of requirements that are non-overlapping
that make it difficult to comply with each of them, perhaps negatively impacting
the competitiveness of various markets. Yeah, that's a valid point, and I think, initially at least, the response to GDPR, for example, for many companies who just didn't have the resources, or thought that the European market was not important enough for them, was simply to pull out of it, so that they didn't have to comply. Exactly. And I think also, you know, it does two things, right? So for those companies that pull out, they've reduced then the choice, the consumer choice, in the local market, so in the EU market in that case, right, because now you've eliminated some offerings. But also, in the broader sense, it
decreases diversity, also perhaps hinders innovation
in a sense that we're really striving to impose
sort of these constraints on organizations
that require a heavy dose of resources.
My sort of, you know,
counterpoint to that would be, if at the same time we, you know, put out these regulations,
we also make available open source tools that help these organizations become compliant,
we might actually democratize, you know, the ability for many firms to compete. So we get the positive benefits
of having these regulations that are in the interest of citizen welfare. But at the same
time, we equip companies to become compliant rather than each of them having to go in and
develop their own processes, their own tools so that they become compliant. So a parallel investment in making tools and processes open source
and available for organizations, I think, is going to be equally important.
Yeah, that sounds like a good approach.
I have to say, though, that personally, I'm not aware of an instance
in which this has been done.
I think typically the thinking behind regulations is that they will basically impose a set of rules and then just let the market figure out the rest.
So for the tools part, for example, it's like, OK, so these are the rules that people have to comply with.
And then, you know, the market will answer that. So solutions will emerge that will help people actually become compliant. The approach you're
suggesting seems to be more holistic, if nothing else. But as I said, I'm not aware of any instance
in which it has been taken. So one example that actually does come to mind that, you know, and I admire this group a lot, is this group called Tech Against Terrorism. open source tools available to smaller organizations,
resource constrained organizations to combat terrorism
or the spread of terrorism, or the coordination of terrorist activities, on these smaller tech platforms.
And what's been interesting there is that
it's exactly this model that I'm talking about,
which is that they've brought together a coalition of entities who are interested in ensuring that these smaller platforms can combat these pernicious activities on their platforms. So there is some precedent for it, right? But of course, it requires an organization, a body, to coordinate these activities, to steer these activities in a way that actually benefits the entire ecosystem, rather than, you know, further fragmenting it so that, you know, we create this unfair, I think, and disadvantageous market structure.
Yeah, and well,
let's wrap up this regulation thing because there's
just so many other topics to touch upon. But before we do that,
I would just like to reference one last bit, one last
quote, actually, I got from an interview we had
with someone who has been
involved in the early stage
design of the internet. And when talking about regulation, he said that, well, AI at the moment is probably more like the packet carriage layer
than it is basic technology.
So in that respect, I conclude that his opinion is probably
that it may be premature to try and regulate at this point
or at this level, maybe.
So that's an interesting idea, right?
In the sense that, you know, on the one hand, AI is a GPT, a general purpose technology, that, you know, permeates sort of all aspects of any domain. But also, again, if you look at the societal impacts of the internet and the various services that have been built on top of it, the early stages of internet development, as some of these fundamental technologies were being put in place, were with the goal, with the idea, of, you know, creating something that was outside of the control of a sort of centralized organization, doing things like
providing sort of equal access and opportunity to anyone in the world who was able to plug in,
to share their ideas, disseminate their thoughts, et cetera. But if you look at sort of what's
happened over the past 25, 30 years since all of this started, that's not really been the case. We have seen centralization of power, even though the protocols themselves were meant to be standardized and outside of the control of any one organization; the structures that actually did get built on top of them had some of those impacts. So maybe it's not so much about, you know,
I guess if we're talking about, you know, making, drawing an analogy here, it's not so much about
regulating, you know, the specific protocol, the packet carrying protocol, but it's more so what kind of structures are going to get built on top of it. And I think that's really what, in a sense, current regulation is also trying to do: to look at how do we address the different
structures that are going to get built on top of this GPT, this general purpose technology,
which is AI. And so I think that is perhaps, if we're talking about the different layers of
abstraction in this sort of technology stack, that's really the layer that we need to be targeting
is what kind of structures are we building
on top of this fundamental piece of technology
and not so much saying, okay, well, I don't know,
regulating a particular class of optimizers, right?
So are we thinking of AdaBoost or Adam or all of these things like RMSProp? We're not looking at that level of detail, right? We're looking at, okay, well, if we have a natural language processing system, how is it going to be deployed? How is it going to be used? If you look at something like GPT-3, what are the implications of using that? How is it going to transform society? What kinds of biases does it have?
If I use it downstream in a product or service,
what am I inheriting?
How do I counter the negative impacts of that, et cetera?
Yeah, by the way, wrapping up this time for real on regulation,
what you just mentioned on, well,
something like regulating specific algorithms, for example,
I think some people used what is, for me, a kind of straw man argument. When the EU regulation on AI came about, they had an appendix in which they listed specific techniques that, you know, may be termed AI, and they listed things such as, you know, Bayesian estimation and so on. And some people came out and said, well, okay, look, they're trying to regulate the use of, you know, Bayesian statistics, and that's obviously not the case, but it may appear as such to some. Another thing that you just mentioned was, well,
datasets and what happens when people build models on those datasets that are then used downstream. And I found a really interesting example of that in the report again. It was an incident referring to a subset of a dataset that was actually used to build GPT-3, if I'm not mistaken, one that had been specifically curated to try to address bias, basically.
And part of what the people who did this work did was that, in their effort, they eliminated some specific parts of that dataset that they deemed to be referring to specific groups in a negative light.
And by doing that, though, that has some unforeseen consequences.
So I guess that goes to show that even beyond the realm of regulation, just trying to be
fair and positive doesn't always work out the way that you think it might.
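To illustrate the kind of curation being described, here is a minimal, hypothetical sketch of blocklist-based dataset filtering. The blocklist and documents are invented for the example, not the actual filter or data behind the incident in the report; the point is how dropping every document that contains a flagged term can also silently remove benign text by and about the very groups the filter is meant to protect.

```python
# Hypothetical example of blocklist-based dataset filtering. The blocklist and the
# documents are invented for illustration; they are not the actual filter or data
# behind the incident discussed in the report.

BLOCKLIST = {"slur_a", "slur_b", "gay", "lesbian"}  # naive list mixing slurs with identity terms

documents = [
    "A recipe for lentil soup with plenty of garlic.",
    "Support resources for gay and lesbian teenagers in rural areas.",  # benign, but dropped
    "An abusive rant containing slur_a aimed at a minority group.",     # the intended target
]

def keep(doc: str) -> bool:
    """Drop any document containing a blocklisted term, regardless of context."""
    words = set(doc.lower().replace(".", " ").replace(",", " ").split())
    return words.isdisjoint(BLOCKLIST)

filtered = [doc for doc in documents if keep(doc)]
print(filtered)
# Only the soup recipe survives. The abusive text is removed, but so is the
# supportive article, which is how a model trained on the filtered corpus can end
# up seeing less, and less positive, text by and about certain communities.
```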
Yeah, no, definitely.
And I think you're 100% right on that, George.
I think the choices, the design choices that we make, the way we choose to ingest data,
how we curate, how we process, how we take all of these decisions has downstream consequences. And, you know, I think sort of on a concluding thought,
like what I really encourage, you know, people who read the report who, you know, sort of talk
to me about the subject area is that you adopt what's called a systems thinking approach, right?
So one of the pioneers, someone I really admire in the field, her name is Donella Meadows, and she wrote this book, Thinking in Systems: A Primer.
And that book, I think, really should be a must-read for anyone who's trying to assess the societal impacts of AI,
because it makes you think about the various
feedback loops that exist when you're developing an AI system and what kind of second order effects
that those systems can have. Because then that will, you know, to the point that you raised, help address some of those concerns that sometimes end up being unforeseen consequences, because we actually didn't think of the second-order, the third-order effects, and we just sort of looked at our little, you know, part of the neighborhood, right, or part of the forest. So we really need to zoom out and think about, well, what are the impacts,
what are the broader impacts of the work that we're going to do? So that's sort of, I think,
you know, a recommendation that I would make to all those who are listening and do pick up that book, read it and see how you can apply that
to your work. Yeah. And I would also recommend, well, do pick up that report when it does come out on Monday, and at least try to go through as much of it as possible, because, like I said, it's quite comprehensive, or pick up the part that interests you the most and just do it.
So thanks very much for your time. I guess it's about time that we wrap up and
good luck with your work going forward. Thank you so much, George, for having me
on the podcast. I really appreciate the opportunity
and the questions. This has been
a delightful conversation.
Yeah, I hope maybe we can continue that conversation on some other occasion as well. For the time
being, thanks a lot and
enjoy the rest of your day. Bye-bye.
Thank you. You too. Bye.
Bye.
I hope you enjoyed the podcast.
If you like my work, you can follow Linked Data Orchestration
on Twitter, LinkedIn, and Facebook.