Orchestrate all the Things - Trustworthy AI: How to ensure trust and ethics in AI. Featuring Beena Ammanath, Executive Director of the Global Deloitte AI Institute
Episode Date: March 23, 2022
A pragmatic and direct approach to ethics and trust in artificial intelligence (AI). Who would not want that? This is how Beena Ammanath describes her new book "Trustworthy AI." Ammanath is the executive director of the Global Deloitte AI Institute and is well-qualified to write a book on trustworthy AI. She has had stints at GE, HPE and Bank of America in roles such as vice president of data science and innovation, and CTO of artificial intelligence as well as lead of data and analytics. Ammanath said part of her motivation to write the book was her desire to help organizations start off with ethics and trust in AI on the right foot, and another part was her frustration with existing approaches to AI ethics. Article published on VentureBeat.
Transcript
Welcome to the Orchestrate All the Things podcast.
I'm George Anadiotis and we'll be connecting the dots together.
A pragmatic and direct approach to ethics and trust in artificial intelligence.
Who would not want that?
This is how Beena Ammanath describes her new book, Trustworthy AI.
After reading the book and discussing the motivation and approach behind it with Ammanath,
we concur. That's an accurate description.
Ammanath is the Executive Director of the Global Deloitte AI Institute.
She has had stints at GE, Hewlett Packard Enterprise and Bank of America in roles such as VP of Data Science and Innovation, CTO of Artificial Intelligence, and Data and Analytics lead. She has worked her way up from SQL development and DBA roles, and she also has startup experience as well as involvement in board advisory and non-profit work.
In other words, she is experienced, well-rounded and well-positioned to write a book on trustworthy AI.
As Ammanath related, part of her motivation was her desire to help organizations start off with ethics and trust in AI on the right foot, and another part was her frustration with existing approaches to AI ethics.
I hope you will enjoy the podcast.
If you like my work, you can follow Linked Data Orchestration on Twitter, LinkedIn, and
Facebook.
So my background has been across a variety of industries. And I've been at Deloitte for the last two and a half years now.
But before that, I was a CTO for AI at Hewlett Packard Enterprise.
Before that, I led data science and innovation at GE.
I've worked at Bank of America, Merrill Lynch, E-Trade, a few startups, which no longer exist,
British Telecom.
So throughout my career, I've always been anchored
in the data space. I've studied computer science. So I started out as a SQL developer, a programmer,
grew into DBA, led data teams. And I've always kind of tried to move between large companies and startups, so that it keeps me, you know, keeps me grounded and
keeps me agile. And it's always fascinating to walk into a totally unknown domain and scenario
and then try to figure it all out, right? And I think part of my career success has been anchored on that fact. Though in the past few years, I have realized and acknowledged that my strength is exactly that ignorance, going into a brand new company and then asking those questions, which really drives transformation, right? And so I've been building AI products, AI solutions, data products and solutions, and taking them to market end to end.
And I realized being an intrapreneur within a large organization is something that, you know,
that is actually my strength. So that's what my role has been in the last, you know, prior to
joining Deloitte. At Deloitte, I lead our Deloitte AI Institute, which is a global AI institute, looking at just the applied AI space, because that's what I needed when I was in my prior roles,
right? Because the space is growing so fast, so rapidly, you know, there is so much noise around
it, right? But when you're part of a business, you know, the problem you're trying to solve,
optimize your supply chain, for example, right?
What you don't know is, you know, how do you solve for it?
Is there an AI product that exists that you can buy off the shelf?
Is there a startup that's working on it that you could potentially acquire or be part of
so that you can, you know, accelerate the product roadmap?
Or do you partner with academia?
So, you know, AI Institute tries to bring that,
all of that together,
along with the fuzzy aspects of regulation
and ethics and the lack of diversity.
How do you make sure the AI that you're building
is robust and reliable and trustworthy?
Okay, thanks.
That was a very nice intro,
which sort of explains the angle you've taken for the book.
And so the book, which is about to be released, if I'm not mistaken, on the 22nd of March, is quite aptly named Trustworthy AI.
And actually, the first thing that drew my attention was the title itself.
I think the term Trustworthy AI is probably getting less publicity,
let's say, than other terms such as AI ethics or responsible AI.
And this is an area I also have an interest in. Well, we're not alone
in this. I think it's a concern for many people for many different reasons. However, both AI ethics
and responsible AI also are a bit fuzzy, in my humble opinion, in the sense that, well,
they entail too many things. They have so many facets, and so it's difficult to pinpoint them, let's say. And I think one of the things that, it looks from the book, you've aimed to do is actually sort of pinpoint this trustworthy AI term, making it more concrete, basically.
So you've listed different facets and you've tried to use them as different angles to examine
trustworthy AI and what it means.
So I wanted to ask you to just quickly iterate through those facets.
I know it's hard because it actually takes an entire book to do that, but just quickly,
you know, the executive summary, let's say,
for each one of those.
And what led you to come up with this definition, let's say?
Yes. Okay.
Maybe I'll start with that latter part of the question.
What led to this definition and then go into, you know,
the definition itself.
So we are in this interesting phase with AI,
George, as you know, where there is a lot of research still happening. The technology is not
fully mature and fully developed, but it's being used in the real world because it drives a lot of
value. And using it in the real world, actually there are side effects which we've not thought
about. Because the technology is not fully mature, we don't know all the side effects. So I put trustworthy AI in that third bucket, where, you know, it is all the side effects that come beyond the positive value creation. And the reason for, you know, the term trustworthiness, right? It's great that AI is creating a lot of value and companies are using it.
But at the end of the day, you know, the end users need to trust it to be able to, you
know, to really drive that adoption and scale of AI.
So trustworthiness is essential.
And what are those dimensions of trustworthiness? I have about 13 dimensions that I go through in the book, including safety, security, responsibility and accountability, and privacy itself.
So to take a step back, I shared with you my background.
The reason for me to write this book was because when you hear about AI ethics, you hear mostly about fairness and bias.
And then you also hear about transparency and explainability.
And those are not really all the factors that fall under trustworthiness.
You know, because fairness and bias is relevant.
You know, it's a very important dimension, but only when it's directly impacting humans.
And here's what I mean by that. If you, you know, if, if you are building
an AI solution that is doing, you know, patient diagnosis, fairness and bias, super important.
But if you are building an algorithm that predicts jet engine failure, fairness and bias,
not as important, right? So, you know, trustworthy AI is really a structure to get you started to think about,
you know, the dimensions of trust within your organization, within your business context,
to at least start having those discussions on what are the ways this could go wrong and
how do we mitigate it?
So moving it from, you know, moving that whole
topic of philosophy, you know, of ethics and responsible AI from the philosophical arena to the
real world arena and how can the C-suite, how can leaders, how can data scientists, how can they
just get started, right? And by no means would I say that the dimensions I've listed or the questions that
you should ask are complete, but it's a starting point because today there is no starting point.
It is so high level. It is so theoretical that most companies struggle with where do we even
get started, right? Should we hire an ethicist? Should we get an ethics officer? I think, you
know, having some semblance of structure is important.
And from my background, I think these are the dimensions that are important.
And even if you think of something like bias, right?
Again, it's human data, but with patient diagnosis and law enforcement, if
the algorithm is biased, terrible, right?
But if you're using algorithms to do personalized marketing
and say you serve the wrong ad to the wrong person
because the algorithm is biased,
it's a different weightage, right?
And today what happens in most businesses,
in most scenarios,
ethics tends to be put into this separate bucket.
And I think it needs to be part of the use case, the context to be able to address it.
And there should be a structured way to think about it.
Yeah, yeah, I think, well, what you said is very much valid.
So at the moment, it's sort of Wild West, let's say, out there in a way.
So people don't know where to begin. Well, some of them are more concerned than others, but even the ones that are concerned are sort of at a loss, like you said. So what do we do? Do we hire philosophers or ethicists, or how do we approach it? So helping them understand the problem and ask the right questions is something you also mentioned in the beginning.
And I have to say that, you know, doing what I do,
asking the right questions is like 50% of the success in a way.
If you manage to ask the right questions,
then you're on the right track.
And specifically for something which is so intangible,
let's say as trust,
even forgetting the whole technical infrastructure
and just limiting, let's say, trust to the concept of trust.
There's many dimensions to that, even just there.
So when we talk about trust in a person or in an institution,
there's already at least two types of trust. So I trust
that that person will do what they say they will do. But do I also trust that they're able to do
what they say they will do? So if you also add the technical, the technicalities, let's say,
and all the data aspects and bias and fairness, all of that stuff just really explodes in a way.
Yes, yes, so true, so true.
And the reality is, in most AI projects, you focus very much on the value creation, right? The ROI, what impact can I have, you know, whether it's more cost savings or new products, right? The focus is more on the value creation. And all that I'm suggesting is, spend some time thinking about what are the ways this
could go wrong?
Because the people who are designing, building it are the most brilliant minds who are focusing
so much on value creation.
Can they spend 10% of their time to think about what are the ways this could go wrong?
Because then they can build in those guardrails, right?
Yeah, and actually a good motivation to do that, well, besides the obvious fact that it's the right thing to do, is that if things go wrong, they can go very wrong. And so that's really going to hurt, you know, the end goal, which is value creation, as you said. So it really pays off to be proactive in that respect.
Yes, yes. And, you know, even if we can reduce it. We hear too often, oh, it's an unintended, you know, consequence, we didn't think about it. This book is like, just think about it. Even if you can catch 50% of the unintended consequences, it's a great start, right?
You know, it's impossible to think of all possible scenarios and have 100%, you know,
coverage on the side effects.
But even if you just have, you know, 50%, I think it's a great start.
And we're not doing it today, right?
Yeah, you already talked about the importance of asking the right questions.
And I find that this is also reflected in how the book is structured.
So this is something that appealed to me a great deal.
So I think it's meant to serve as an overview and introduction, as you said.
And it looks like the main audience would be CXOs, basically,
so different C-suite executives.
And I get that impression because, well, you use those roles
to develop sort of fictitious scenarios,
and you also use a fictitious company throughout the book.
And in that setting, you create different scenarios
and you have those executives being set up,
let's say, in different situations.
And through that, you use it to develop,
let's say, the scenario and see what can go wrong
and what these executives should
be asking in a kind of problem analysis solution way, which I find is a very user-friendly
way to approach the problem.
So I wanted you to just share a few words on what led you to adopt that approach, and I also wonder whether your actual real-world experiences have helped shape that in any way.
Absolutely, absolutely. Look,
you know, one of the advantages of having worked in different industries, and now at Deloitte, you know, getting that exposure to different industries, is that I bring a perspective that's broad, but it can also go deep. And we know that with the topic of ethics, you know, most leaders, you know, think it's a philosophical thing. It's more theory, and, you know, it doesn't apply, they don't know how to grasp it, right? So it is targeted at current and, I would say, future CXOs. So the
managers, you know, the data scientists who are aspiring to be managers in the future. So it is
really for anybody who works within an organization, but primarily for CXOs. And the reason I structured it around a fictitious company is because then it's relatable, right? It is, how do you translate this complex topic of trust, right? Like you said, it is extremely complex, it is very fuzzy. So how do you get your arms around that complex topic in a real-world scenario? And the company named BAM is actually my initials, being a manufacturing business.
I just came up with it.
But I wanted to make it more relatable
so that companies can actually think about it.
The other nuance, if you noticed, was also,
I don't think ethics is just targeted
for the companies who build the AI products.
Every company should be thinking about it,
even if you're just a consumer,
even if you're buying an AI tool,
you should look at it from a trustworthy perspective
because you're bringing it into your organization,
you're bringing all those risks along.
So I think it is relevant for every organization.
So it is put from that user perspective as well as a builder perspective. And it was intentionally done that way so that it catches the user's attention and moves from a theoretical, complex topic into a real-world scenario. And to your point, most of the
scenarios I've encountered to some extent are variations of it, right?
There's also a belief that ethics is just for data scientists or just for the IT team.
And that's not true, right?
It's relevant for the CHRO who might be buying AI tools for recruitment.
The CFO whose team might be using AI for some kind of account management or document management. So there is, you know, it's relevant for every C-suite. It's not restricted just to
the IT team or just the CEO. I think every person within the organization needs to know
what trustworthy AI means for their organization and how it applies to their role, right? Even if you're that marketing intern who is part of a vendor discussion, they should know what questions to ask, right? Beyond the functionality, to say, what kind of data sets have you trained on, right? Aligning behind those principles and making it real in their role needs that level of acceptance.
So my hope is with the book, at least, you know, it provides a base level understanding. And trust
me, you know, I've laid out, you know, 13 dimensions, but there might be another dimension
that's relevant for their organization, right? But at least this is a conversation starter,
and it gets them started on that journey.
Yeah, yeah. I think, you know, in a way the takeaway from that would be, healthy curiosity is good. So don't be afraid to ask questions and, well, ideally try to make them relevant, but don't be afraid to ask would be a good start. And to add to your point, I think what you described about, well,
that the fact that it doesn't just, it's very hard to draw lines, basically.
So if I'm just using, let's say, an AI system that somebody else built,
that doesn't mean I'm not in some way potentially accountable for it.
So that goes into the specific dimension of accountability,
which is pretty well explained in the book as well.
So when things go wrong, you may be accountable as well.
So you'd better know at least to the degree that concerns you
what's going on and what needs to be done to remedy it if things go wrong.
Yeah, ethics is one of those and trust is one of those, you know, it cannot be aligned to just
one person, right? Oh, this person is responsible for, you know, AI ethics across the company.
It just won't work. And I've seen it, you know, many of those examples are, like I said,
are variations from my experience. And I've seen it too often.
I was like, oh, let's just bring in an ethicist.
And, you know, that person can take care of everything.
And it doesn't, it just won't scale in the real world, right?
You know, the book is providing that grounding to the real world of, you know, don't believe
that somebody else will come and solve it for you.
You know, everybody needs to be part of it. So translating it to that real world is the biggest, it was the biggest gap I've seen. I haven't seen any other, you know, book on, you know, on the topic. It's not like I had a lot of time or, you know, I wanted to, but I was getting frustrated, George, because of the focus just on fairness, bias and a few of these, you know, dimensions. Whereas I think if you're an organization, there are so many other dimensions you need to think about, and you really need to make that decision as relevant to your organization. It's not just adopting something that's put out there and, you know, bringing in somebody to execute, it doesn't work.
Yeah, and to add to that, I think I've also seen many, I would say, mostly technical approaches
to addressing, let's say, fairness and bias.
So, and don't get me wrong, these are absolutely essential, even in some cases, and it's good
to be aware of them and to apply them in how you structure your data sets
and in your algorithms and all of those things, but they're not enough.
So the solutions presented in this area, let's say, tend to border on solutionism, so-called solutionism. So, okay, we have an issue, let's find the technical solution for the problem.
Usually, it's not enough.
It's not enough, and it's not looking at it holistically, right,
from a business impact perspective.
Because you might choose, yes, we know this algorithm is biased,
but we are making a thoughtful decision to move forward with it because the impact, the risk is not as high for the personalized marketing example.
So making an informed decision as opposed to blaming it on, we didn't think about it.
I think that if we can move in that direction, then I think we've made progress.
And that actually brings me to the chapter in which you deal with trustworthy AI in practice, and you try to, well, distill
the analysis in some actionable steps that people can take.
And those steps are named identifying the relevant dimensions of trust, and cultivating trust through people, processes, and technologies.
And I wonder if you could talk a little bit about those. Again, just a sneak preview, because obviously you can't explain the whole concept in its entirety in the time that we have.
And also, if you could introduce the lens of sociotechnical systems, because this is something that you use in order to explain that.
And I think it's very relevant and I think people would benefit if they knew about it
Yeah, so okay. So the number one step is to, you know, think, and these are more as buckets of items, they might be called differently, but when you think about how do you operationalize it, the first step is to, you know, bring together your key C-suite, your board, and define what does AI ethics mean for our organization.
And the next step from a, you know,
people perspective is to educate,
train, make every employee AI,
you know, AI ethics fluent.
They should understand
what does our ethical guidelines mean.
And it's very easy to do. It's not something
you need to create from scratch because every company you join, you have an integrity training,
right? You just append to it, right? And say, these are the trustworthy AI principles we align
behind and this is how it applies to your role. So including it as part of your training to make
sure every employee is empowered is the first step.
And then the next step is to think about,
do you need a chief AI ethics officer?
Do you need an ethicist on staff?
In most cases, if you are designing
or developing AI products, then you need somebody, right?
You need that level of expertise
because your data scientists cannot take care of it.
It's not a technology solution, right? But if you're just a consumer of AI solutions, you probably don't need
an ethicist, but you need access to somebody who has that deep expertise who can influence because
the solution for trust, as we said, it's not just a technology problem. There is a technology
component, but there is also the component of
social sciences and philosophy and anthropology, looking at historical patterns and being able to
bring in that perspective of what other ways this could go wrong. I think that you need that, I mean, but if you are just using it, you probably don't need to hire a full-time role for it, depending on your usage.
It depends.
So making that informed decision from a people perspective of who's going to keep an eye
on this going forward.
From a process perspective, and this is the most crucial one, right?
Looking at your existing processes and saying, where should we inject the changes that we need to make sure that we are sourcing
ethical AI products or we are building trustworthy AI products, right? That is super important. So
say you are a big tech company who builds their own AI products, in your project management tool,
whatever it is that you might be using, add in a step saying what are the ways it could go wrong.
Like every project has an ROI column that you need to fill.
Have a column which says, here are the ways that we think as a team that it could go wrong.
And it should ideally not be blank.
I'm sure there will be some things that you'll come up with, you know, as you think about it.
So, you know, it could be as simple as that,
adding in a question that's, you know,
in your project management.
When you're sourcing, adding in that question,
you know, to ask from your vendors, right?
Make it part of your vendor sourcing process.
So changing existing process to, you know,
capture these challenges early on,
but then also providing
your employees a 1-800 number, or, you know, I call it a 1-800 number, but it's, you know, who can they call if they feel they're stuck. Just like you have an integrity helpline, right? Having some kind of a helpline. They've got the training, but, you know, they are assessing a vendor and they are not sure, you know, there are going to be those scenarios, who should they call? So changing the process to include how they can reach the SMEs if they need to, right? So basic changes like that in the process are super important to make it relevant. And from a technology perspective itself,
it is really looking at the software that you're using, the software that you might be building, or you might be sourcing.
How do you put in guardrails within those technologies, right?
And this is most relevant for the companies that are developing AI, right? Building in those guardrails, embedding it within the technology, making sure that you've addressed
all the risks that you identified within your early project process to make sure those guardrails are
in place. So I think having these different buckets is a great way to start operationalizing
trust. And I'll tell you, you will never get it fully right
because it's all going to be learning.
And there are so many different angles to it
that you need so many different perspectives.
But I think you'll at least get started.
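To make the process change described above a bit more concrete, here is a minimal sketch in Python of a project intake record that pairs the usual ROI field with a "ways this could go wrong" field. The class and field names are hypothetical illustrations, not something taken from the book or the conversation.

```python
# Hypothetical sketch of the "ROI column plus a ways-this-could-go-wrong column"
# idea discussed above. Names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIProjectIntake:
    name: str
    expected_roi: str                                        # the value-creation side every project already tracks
    failure_modes: List[str] = field(default_factory=list)   # "what are the ways this could go wrong?"
    mitigations: List[str] = field(default_factory=list)     # guardrails planned for the identified risks

    def ready_for_review(self) -> bool:
        # The point of the exercise: the failure-modes column should ideally not be blank.
        return len(self.failure_modes) > 0


# Example: a personalized-marketing use case with at least one identified risk.
project = AIProjectIntake(
    name="personalized ad targeting",
    expected_roi="higher click-through on recommendations",
    failure_modes=["biased ad delivery to certain user groups"],
    mitigations=["periodic bias audit of served ads"],
)
print(project.ready_for_review())  # True
```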
Yeah, I think that's the takeaway for all of that.
And if I may just add to that, I think what comes out of the discussion so far is that, well, none of it is, you know, alien really to people who work in organizations.
It's all about things that they do already in one way or another.
So these people, processes, technologies dimensions that you just referred to, I'm sure people already do that. People already have that in place. They just need to take AI seriously enough to facilitate caring for it sufficiently, and just add it to their processes and their inner workings, basically.
Yes, I agree.
Completely agree.
I hope you enjoyed the podcast.
If you like my work,
you can follow Linked Data Orchestration
on Twitter, LinkedIn and Facebook.