ACM ByteCast - Ray Eitel-Porter - Episode 82
Episode Date: February 26, 2026
In this episode, part of a special collaboration between ACM ByteCast and the American Medical Informatics Association (AMIA)’s For Your Informatics podcast, Sabrina Hsieh and Li Zhou host AI safety and ethics expert Ray Eitel-Porter, Luminary and Senior Advisor for AI at Accenture and an Intellectual Forum Senior Research Associate at Jesus College, the University of Cambridge. Previously, he served as Accenture's Global Responsible AI Lead. Ray is the author of Governing the Machine and sits on several boards and councils advising on data analytics and strategy. In the interview, Ray shares how he was inspired to research responsible AI by data privacy concerns and how biased datasets harm models. He describes his objective as helping people understand the potential risks of emerging technologies in order to confidently use them. He discusses case studies from his book where companies successfully implement responsible AI practices in the workplace, and shares how his framework will be useful even as technologies continue to emerge and change. Finally, Ray offers some advice for younger professionals in AI and medicine.
Transcript
This episode is part of a special collaboration between the ACM ByteCast and AMIA's
For Your Informatics podcast series.
ACM, the Association for Computing Machinery, is the world's largest educational and scientific
computing society.
AMIA, the American Medical Informatics Association, is the world's largest medical informatics
community.
In this new series, we talk to AI and
medicine leaders, including researchers, practitioners, and innovators,
who are at the intersection of computing research and AI applications for healthcare and life sciences.
Our guests share their experiences in their interdisciplinary career paths,
their lessons learned in AI and medicine, and also their own visions for the future of computing.
Hello and welcome to the ACM AMIA joint podcast series.
This joint podcast series explores the intersection of AI and medicine.
I'm Dr. Sabrina Hsieh, Director of AI Enablement and External Innovation Advisor.
I'm here to represent the ACM ByteCast series.
My co-host here today with me is Dr. Li Zhou, Professor at Harvard Medical School.
She's here to represent the For Your
Informatics podcast with AMIA.
Ray served as Accenture's lead for Data and AI in the UK
before becoming Global Lead for Responsible AI.
After leaving Accenture,
he has focused on researching and writing the book
Governing the Machine.
This book draws on many years of his experience
advising companies on how to implement AI governance,
alongside his current position at the Intellectual Forum,
University of Cambridge, and his independent advisory work.
So we say, hey, Ray, welcome to our AMIA ACM podcast series.
Thank you very much.
I'm delighted to be here.
Very well, and coming to you from a surprisingly sunny, although chilly, London.
Yeah, so before dedicating yourself to Responsible AI, right,
we know you had a long journey to where you are today,
and you also served as a managing director at Accenture, leading the team focused on the UK Data and AI practice.
Maybe you can tell us more about yourself, especially what made you come into this career of data and AI in the first place.
Yes, absolutely.
I mean, I have always been fascinated through my career with the power of technology,
how technology can enable us to do new,
interesting and valuable things. And I've worked in that space in a number of different ways over
many years. And when I was at Accenture looking after their data and AI business in the UK,
it was an exciting time because we were just moving from sort of the early days of data analytics
to more advanced data science and capabilities, what we would now call AI, but we didn't
call it AI in those days. And then we started to see some of the risks emerging. And I think the
first two risks that really started to catch people's attention were bias in historical data sets
that were being used to train AI models. And that bias would then perpetuate or even be
accentuated by the model. And the other one was data privacy and, you know, potential misuse or
revealing of confidential private information that shouldn't have been used or shouldn't have
been revealed. And what I started to recognize was that organizations that wanted to use AI and
advanced analytics, and that was my objective, was to get them to get the value out of these
technologies, they were being held back or made cautious because of the potential risks. And I thought
that this was an area that would grow and would be really important to find ways to help
to reduce those risks so that people can be confident in their use of AI and similar
technologies to really get the benefits.
Thanks, Ray, for telling us more about yourself and your transition from the traditional
analytics practice to becoming a global authority on responsible AI.
And can you share with us an "aha" moment in your career that
led you to this transition?
Yes, I think it was probably reading a book called Invisible Women, which looks at how society
has been actually very biased against women over many decades.
And it's very well written, and it presents many examples of how data,
not even with advanced analytics, but just how data has been used, or how people have done things in
ways which are not representative. One example that sticks in my mind is the use of crash test
dummies for testing cars and how good they are at protecting people in an accident.
And these crash test dummies were designed against a male body. And a female body is
different, as we all know very well from anatomy and medicine, etc. And therefore,
only testing on male bodies is not the best way to develop safety systems in a car, which will
be equally safe for women and for men. And this was just one of many, many examples in this book
that opened my eyes, I think, to the risks that can be involved in taking historical data sets
and not considering whether they are biased against any particular part of the population. It
might be women, or there may be many other groups that are disadvantaged or biased through
misrepresentation in the data.
Thanks so much for mentioning that.
We actually have a Women in AMIA community that has looked into these issues a lot.
And we're glad to hear that those kinds of issues brought you to this field.
So now we can maybe dive in a little bit into your book,
especially chapter three, right?
You have this chapter titled "Whose Problem Is It, Anyway?"
So if you were imagining this for the health system now, right,
whose problem would you think it really is?
Who does this problem really belong to in the healthcare system?
Is it a CEO?
Is it a vendor?
Is it a doctor?
What would you say?
Well, I'm going to say that it belongs to everyone,
which in one way you may think is not particularly helpful
because obviously we need to have a clear accountability at the end of the day.
So if we're thinking about designing and implementing an AI governance process, yes, there has to be an owner for that.
But I think the point I'm trying to make is that you can't implement good AI governance without involving multiple different stakeholders across the system.
And in many ways, I think who the ultimate accountable owner is doesn't matter too much.
What they do need to have is the ability to collaborate and to bring other people into the discussions
and to be able to get people to agree who's going to be accountable for what,
how are the different parts of the business going to work together.
And then, as you mentioned, patients, vendors, procurement, etc. have to be involved,
the doctors have to be involved, and so on.
So really, whoever's leading has to be a master of orchestration in bringing these different
people together.
So this is really not just one person's job.
Absolutely.
Thanks, Ray.
Another rapid-fire question is related to your chapter about the governance paradox.
You argue that governance is an accelerator.
Give us the thirty-second pitch to a
skeptic: how does adding brakes make the car go faster?
Well, I think it's because most people now, if they're looking at the news,
will have seen examples of AI going wrong in one way or another.
And probably if they've been using consumer AI tools,
they will have actually experienced a hallucination, let's say, from an AI model,
a recommendation that isn't accurate or what have you. And that immediately makes people a little
bit nervous, particularly in a business context. If it's a consumer context and, you know,
your ChatGPT or Claude or Gemini gives you a wrong answer, it may not matter too much,
depending on exactly what you're doing with it. But if it's in a business context, particularly
if it's in a medical or pharmaceutical context, wrong answers can have very, very serious consequences.
So people are, I think, understandably nervous.
And that's why putting in place an AI governance process is essentially giving you the confidence
to know that all the right checks and tests and controls have been taken when developing
and deploying the AI so that you are as confident as you can be that it is going to work
in the way that you intend.
Yeah, thank you.
Knowing the risk and knowing how to handle the risk is important, right? Thank you.
Exactly, exactly.
So now we can talk a little bit more about some statistics we have seen globally and see how
they relate to your book.
Right.
I guess recently, as we know, we have seen a lot of reports coming out.
AI has been such a hype this year.
Actually since 2022, right?
And we have seen lately more reports come out about
AI adoption. Like the McKinsey January 2025 report: CEOs are pushing for accelerating
AI adoption, yet at the same time, they get paralyzed by safety risks, just like what you
have mentioned earlier. But also, we saw another McKinsey report say that 75% of companies are now
using generative AI, but only 25% have responsible AI governance. And a recent MIT report, which you might have
read, says that in reality, those AI deployments that really yield positive ROI are only 5%.
Right.
So the more we read about those reports, the more we are wondering: have we really reached the
inflection point?
Can we really find a way to control this?
Is there any way to balance the brake and accelerator dynamic that you mentioned in your
book? So the statistics that you described definitely reflect what I see in the marketplace when I talk
with organizations, although I do think it's getting better. So if you had gone back a couple of years,
the percentage of companies with AI governance in place would have been even smaller. It is growing,
which is very encouraging. And I am seeing that the majority of companies that come to me,
or to companies that I work with, like Accenture, looking for AI assistance, are generally
also asking for AI governance or responsible AI assistance at the same time, because they recognize
the importance of building the two together. So I think we are seeing improvements. I think that the
early hype around AI, which came frankly from the consumer applications, ChatGPT,
and then other, you know, similar chatbots that were released to consumers, of course they
were incredibly impressive. People started using them and they were, wow, this is amazing. It can do
these things. And as I think I mentioned earlier, the consequences of those types of systems
going wrong in a personal capacity were generally not too bad. When people started using them
then in a professional capacity, you probably remember examples of lawyers who used them to research
cases that turned out to be fabricated, that didn't exist, and they went into court without checking their
facts. And the judge pointed out that this was not a correct case reference, as just one example
of serious consequences from it going wrong. And I think it's that recognition that has slowed down
people's initial euphoria and made them think, actually in a business context, we need to be
quite careful here. And I think that's a good thing.
Thanks, Ray. I want to move to another topic that we talked about: the governance paradox.
Can you explain in more detail why you believe that speeding up AI adoption is actually
impossible without the brakes of good governance? And what is your advice to the healthcare leaders
who often face those challenges?
Yeah, so I think what's really important is adopting a governance mindset from the beginning.
So what do I mean by that?
I don't think it's particularly helpful to have a long checklist of governance or compliance questions
which are answered at the end of the process.
So you have somebody who is considering how to use AI: they find the data, they train a model,
they develop their model, they test it, and then they come to somebody for a governance or a compliance check.
That generally doesn't work very well because if you find problems at that point, you have to go right back to the beginning in many cases.
That wastes a lot of time. It causes anger, frustration, cost and so on.
So the key is really building a mentality, a mindset of AI governance and checking throughout the whole process and training people,
and giving them the right tools and guidance and so on,
so that as they're developing their AI,
right from beginning and thinking about,
well, is AI the right tool to use for this particular question?
Is it the most appropriate thing?
Maybe there's actually a simpler, deterministic software-type solution
that would be more appropriate,
because people at the moment have a bit of a tendency to go for AI for everything,
even if it's not the best answer.
So right from deciding: should we use
AI for this, yes or no? And then, what data set are we going to select? And let's do the right
tests for bias, for accuracy, for all these other things on the data set. And then we develop
our model and so on. If you apply a responsible AI mindset throughout the process, when you get to
the end and you need to submit a report, let's say, for AI governance, it should be an automatic
pass because you've checked everything all the way through and it should be very easy.
And that builds trust then also between the different stakeholders within the organization.
Exactly, yeah, I agree.
The strategic planning and also the related actions will make it work,
instead of just having a compliance list.
Thank you.
So it does seem that having that governance mindset from the start is
key here. At least one survey has shown that in financial services, for example,
57% of leaders identify responsible AI standards as a leading contributor to
ROI, significantly higher than those who point to generative AI as the technology itself.
So responsible AI standards are a more important contributor to ROI than the technology itself.
So why would you say that's the case, and why, specifically in the financial industry, do we see this
direct line between standards and profit?
Is it simply the power of avoiding fines?
Or is there really a so-called trust dividend here that these companies are capturing?
What do you think?
I do think it's a mixture of both, right?
So in financial services, the same as in healthcare, obviously regulation is extremely
important. And I think the purpose of that regulation is obviously to keep consumers safe,
but it's also to build trust, because if a consumer knows that a particular industry or a product
or a service is carefully regulated, they're likely to have trust because they know some third
party has been checking it and making sure that it's safe and it's okay. So I think this point
about regulation and trust really go hand in hand. And I think that the financial services companies
that, you know, you mentioned from that particular report, what they've seen is that, yes,
they have to comply with the regulations, but actually trust is a key advantage in winning customers
in a competitive market and making sure that you're delivering services where you are transparent
about what AI you're using and not and really helping customers to gain trust in that
is going to be critical in the customer feeling comfortable when using AI in a product or a service.
I believe that should resonate very well with a lot of leaders who are listening to this episode
today from the healthcare and life sciences industry.
Yes.
We want to ask you some more questions about the trust barriers and challenges.
You mentioned before that trust is the currency of scaling AI.
Can you share an example from your experience to illustrate how a company failed to scale an AI pilot,
not because the technology did not work, but because they couldn't prove it was safe to their customers or investors?
If I may, I'm going to answer the question slightly differently because I like to look at positive examples rather than negative examples.
So let me give you an example which I'm involved in where I think the right approach is being
used in order to be able to scale the use of AI. So I sit on the AI ethics and governance board
for something called the local government and social care ombudsman in the UK. So this is an
organization that deals with people's complaints across the whole of the UK. Any complaint related
to local government or any complaint related to social care, so health care and so forth
within the community and the provision and allocation of healthcare services in the community.
And the team there recognized that AI could be very helpful in what they do, but they also
obviously recognize that this is a highly sensitive area and they need to be extremely careful
in how they use AI.
So the first thing they did was they established a set of principles,
and the number one principle is that humans make all decisions.
No questions.
Humans make the decisions.
The AI is there to help, but the human makes the decision.
Where they saw the big advantage of using AI is that the main thing that their professionals do
is they work through large amounts of information to review a case that's been
put forward, a complaint that's being put forward. That can be dozens, hundreds sometimes of different
documents in different formats, you know, word, email, PDFs, they're being sent in all kinds of
different ways, they're not organized or structured. That's a perfect use for AI to go in and just
sort the documents into which ones came from a doctor, which ones came from the complainant,
which ones maybe came from a council or whatever. What was the order in which these things
took place? Let's put them into chronological order. Let's give them a sensible title so that the person
reviewing can then easily, you know, refer to them and find them. And that use of AI can potentially
save quite a few hours of time at the beginning of an investigation. It's not something that's
interesting for the investigator to do. The investigator is an expert. They want to look at the facts.
They want to make a decision, not spend their time opening PDFs and trying to figure out the
date of the PDF and so on. But what they found was an understandable nervousness and lack of
understanding and knowledge on the part of all of their employees. And so they actually
mandated a four-hour in-person AI training for every member of the organization. And they were
worried that people would push back on that and would say, oh, come on, I don't have time for
four hours, I have to go in person and so on. And what I found really interesting was that
quite the opposite happened. They were oversubscribed for the first session. People were so keen to
attend the training. And they did a survey asking people before and after the training of
what level of sort of understanding and comfort did they have with AI. And before the training,
the average was around 20 to 30 percent. And after the training, the average was over 90 percent.
So I think this really illustrates how explaining to people about AI, giving them enough understanding,
not to make them experts to do their own coding or anything like that, but giving them the
confidence that they understand how to use it well, where to be careful, and so on.
And then you give them a clear set of guidelines or principles that tell them what you can
and can't do, like the human makes all decisions.
And in this way, you can, I think, very effectively apply AI even in highly sensitive areas.
ACM ByteCast and AMIA's For Your Informatics are available on Apple Podcasts, Google Podcasts,
Spotify, Stitcher, and other services.
If you're enjoying this episode, please subscribe and leave us a review on your favorite platform.
You mentioned some principles before, including transparency and interpretability,
and those are the basic principles that may gain users' trust.
Exactly, exactly.
Yes, so AI literacy is important.
Yet, if we look at this landscape we have now for AI standards,
no one can really keep up; there are so many different standards
out there. I saw the United Nations AI for Good Summit last year had been
publicizing that.
And in your own book, you actually did a great job of synthesizing those global
regulations and standards, like the NIST AI Risk Management Framework, the OECD human rights
approach, the EU AI Act, and the Singapore MAS FEAT principles.
And you have this very beautifully made AI governance framework
diagram. I'm afraid our audience cannot see it here, as they don't have your book in front of them.
But would you be able to maybe walk our audience through it mentally: what does it
look like, and what are the core building blocks that every company, regardless of geography,
should have in place when navigating this AI standards landscape?
Yes, yes, I'd be happy to. I mean, there are, as you say, lots of very good standards
and frameworks and approaches out there; we just felt there wasn't one single approach which we thought
covered everything, and which was also relatively straightforward to implement, because the book is
aimed not just at very large companies with a lot of resources, but also at medium-sized
companies, smaller companies that don't have massive amounts of resources to build an AI
governance program. So I think it's important that you start with some principles. That's very easy to do.
There are lots of principles that you can borrow on the internet. My only comment there would be
make sure they are tailored to your organization. And I think life sciences companies are actually
particularly good at this. And then you want to translate those principles into an AI policy
or most likely you will look at existing policies that affect your operations and your use of data and so on,
and you'll want to review them and amend them to the extent that's necessary to incorporate considerations for AI.
Next, I think, is really important to have the accountability structure.
I mentioned this earlier, you know, the fact that you need lots of different people involved across the organization.
You obviously need the technical team, but you need legal, you need procurement, you're going to need risk, HR and so on.
So you need to be clear about who's accountable for what and how the different teams work together so that you're not duplicating, but there aren't also gaps between the way that the different teams capture things.
Think about privacy, data privacy, for example: it's a part of AI governance, but there probably will be a data privacy team already looking at that.
Cybersecurity is another really important component, but again, there's a team already doing that.
So how are these different teams going to work together? And then we're going to need some training.
We're going to need some people who are dedicated to this. I see a lot of organizations setting up
a small central team who are AI governance or responsible AI experts, and they are helping to
design the program and then roll it out. But ultimately, you want this to be owned by everyone
across the organization. So you want to train across the whole of the organization and get the
teams that are executing in their day-to-day business to be owning and driving this. And then I think you need
some, let's say, sort of technical infrastructure. So it's really important that you are
able to create an inventory of where you're using AI. It may sound obvious, but if you don't know
where AI is being used in the organization, you can't control it. You can't check that it's working
right. And that's often not so easy, especially these days when AI is being developed by lots of
people with no code or low code tools and when many vendors are including AI in their products or
services. So actually finding what I call the checkpoints where you can ask a question, is this new
thing that you're proposing to do or to buy or to make? Does it include AI? And if so, then you need to
answer some questions. And those questions should be risk-based. So if it's a low-risk usage,
let's say you're using AI to help you write the menu in the hospital canteen. That's
pretty harmless, right? We don't need to do a lot of checking for that. But if we're using AI in some other
medical application, then clearly we're going to want to do a lot more, a lot more checking.
You probably want some kind of workflow tool that will manage the checks and the approvals and
will document and record who did what and who approved things and so on. That's useful for a
regulator. It's also useful internally because if something does go wrong later, you want to be able to go back
and see: at what point did we make this change? What was the data that was used to train this
particular model, and so forth? And then I guess finally you'll want some reporting and audit
around it, because you want to have visibility about how the system is working. And then, it
may appear obvious, but at the very core of this is an actual assessment of the AI solution.
So the rest of what I described is, if you like, the sort of apparatus, the governance
program that makes it work, but then you will want detailed assessment questions for higher
risk AI, which dig into and uncover all the different possible risks and what mitigations have
been taken to make sure that those risks can be minimized.
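To make that concrete, here is a minimal, hypothetical Python sketch of the kind of risk-based checkpoint Ray describes: an inventory entry for each AI use, a simple risk triage, and a list of checks required before approval. The tier names, check names, and triage rule are illustrative assumptions, not a prescribed standard from the book.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"    # e.g. drafting a canteen menu
    HIGH = "high"  # e.g. anything touching patient care


# Checks required before approval, keyed by risk tier (illustrative only).
REQUIRED_CHECKS = {
    RiskTier.LOW: ["purpose_documented"],
    RiskTier.HIGH: [
        "purpose_documented",
        "bias_testing",
        "privacy_impact_assessment",
        "security_review",
        "human_oversight_defined",
    ],
}


@dataclass
class AIUseCase:
    """One entry in the organization's AI inventory."""
    name: str
    owner: str
    involves_patients: bool
    completed_checks: list = field(default_factory=list)

    def risk_tier(self) -> RiskTier:
        # Simplified triage: patient-facing uses are treated as high risk.
        return RiskTier.HIGH if self.involves_patients else RiskTier.LOW

    def outstanding_checks(self) -> list:
        required = REQUIRED_CHECKS[self.risk_tier()]
        return [c for c in required if c not in self.completed_checks]

    def approved(self) -> bool:
        return not self.outstanding_checks()


if __name__ == "__main__":
    menu_bot = AIUseCase("canteen menu writer", "facilities",
                         involves_patients=False,
                         completed_checks=["purpose_documented"])
    triage_bot = AIUseCase("symptom triage assistant", "clinical IT",
                           involves_patients=True,
                           completed_checks=["purpose_documented", "bias_testing"])
    for uc in (menu_bot, triage_bot):
        print(uc.name, "| tier:", uc.risk_tier().value,
              "| outstanding:", uc.outstanding_checks())
```

The point of the sketch is simply that low-risk uses sail through with minimal checks, while higher-risk uses accumulate a longer list of required evidence before approval.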
Are there any other parts of this, or any particular AI risks,
that you think are worthwhile calling out?
Well, I'll mention perhaps another example of a company in the healthcare space that took what I found a very interesting approach. So the company, I can name it because they're in my book, the company is Bupa, which is a private health insurer in the UK and in other countries as well. And they started with a view that they would have a central team that would test and check all of the AI uses.
And they built a framework very much like I have described, but their plan had been to develop the assessment and then have that run by a central expert team.
But they realized after a while that that wasn't going to be feasible because of the number of AI use cases coming through and the budget that it would take to have a big enough central team to do that.
And so what they did instead was they built a self-service process, and they made that open to everyone in the company.
Now, they obviously protected sensitive information. That wasn't revealed. But effectively, what they did was they crowdsourced the checking of other people's AI models and of the assessment results.
So you as the team that was developing an AI solution, you had to run the tests.
There was a set list of tests and protocols about how those should be done and under what
circumstances you do what test, et cetera, which approach to use.
But then rather than the central team running the tests, the team developing the solution
ran the tests, but the results were all posted within this open system and anybody else
could look at them and comment on them.
And obviously it took a little while to get going and for people to get interested, but they found that actually there was a lot of interest and a lot of sharing of expertise and knowledge, a lot of helping of colleagues with suggestions about how to do things.
And what it also had as a sort of unexpected benefit was it shared best practice because some of the teams looked at it and thought, oh, we were thinking of building that ourselves.
We don't need to, because they've done that in some other department;
we can work with them and we can borrow their solution.
And I found that to be a very effective way of doing things,
especially let's think about perhaps some smaller medium-sized companies
that don't have the budget and the resources to have a big central team doing this.
There are still ways that you can make it work.
Beautiful. Yeah, it always takes a village to do this.
I would love to read more about that in the book now.
Thanks, Ray. You mentioned learning from prior experience and best practice, and you gave some examples.
And you also mentioned that the frameworks and regulations may be different between countries and companies, and also change from time to time, right, because AI and technology are evolving.
So we would like to hear more about your views on some specific frameworks from the early adopters,
such as Singapore's monetary authority, which was an early mover with their FEAT principles:
fairness, ethics, accountability, and transparency.
So our question is, since they started earlier than most,
what lessons can US and European companies learn from Singapore's experience
regarding what works and what doesn't when enforcing those principles?
So I think my biggest learning from what the Monetary Authority of Singapore did was their very heavy use of pilots with a consortium of different organizations that they brought together.
So at each stage of the process, and they've been doing this now for, it must be four years, I suppose, developing and improving the frameworks:
they had traditional AI originally, and then generative AI came along,
so they revised things for generative AI, and now they've got agents,
and so they're looking at agents.
And what they've done each time is they have got together a consortium,
a group of companies that are involved in providing products and services.
So in their case, mainly financial services companies, banks and insurance companies and so on,
because they're a regulator for the financial services industry.
They brought in big tech companies, the Microsofts, Googles and so forth, to collaborate,
and they brought in a large number of smaller specialist AI expert companies,
and particular companies specializing in AI assurance and AI technical testing and so on.
And they basically created a number of problem statements or questions and things that needed to be explored.
And then these were assigned across this group of collaborating
organizations to explore and try to develop answers and then bring them back at the end of
the project, if you like, to be discussed in a bigger forum and to share the ideas. And I think that
that approach, which they've now repeated a number of times, is part of the reason that their
approach has been so successful, because it really is grounded in practical experience from
the companies that want to use the products and services, the technology providers, academia as well,
and technical experts, all working together to really explore the problem. And they've given them
access to sandboxes so that they can actually test models and data and things without being
concerned that they're doing something open on the market that might cause regulatory problems
and so on.
I think this approach has been extremely effective.
So, how to operationalize from the principle level
to something that can work in practice, in the workflow directly:
it sounds easy on paper,
but oftentimes this is where things break.
And in your book, you also explore this a lot
with those struggling organizations who face this problem too.
What are the most common failure modes, if we may call it that way,
that you have seen across companies when they are doing this,
trying to take the principles and standards
and implement them in practice in workflows, especially for engineers?
So I think, first of all, I would say this is definitely the hardest part, right?
The implementation is definitely the hardest part.
If your organization already has a data science workflow, a data science lifecycle, and increasingly
companies do, companies that are developing AI tools, data science tools, they will typically
have some sort of methodology, some process steps that people need to go through.
Then embedding your AI governance questions into that existing flow is very effective.
Rather than trying to create some new process, if your engineers are already using a particular framework
and they know the steps, the stage gates, they go through the approvals, etc., then add to and build on that.
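As a hedged illustration of what embedding those questions into an existing lifecycle might look like, here is a small Python sketch of governance checks attached to familiar stage gates. The stage names, check names, and evidence format are hypothetical placeholders, not a framework from the book.

```python
# A minimal sketch of governance checks embedded in an existing data
# science workflow's stage gates. Stage and check names are illustrative
# assumptions, not a prescribed standard.

GOVERNANCE_GATES = {
    "ideation": ["is_ai_the_right_tool", "risk_tier_assigned"],
    "data_selection": ["bias_tests_on_dataset", "privacy_assessment"],
    "model_development": ["accuracy_targets_met", "robustness_tests"],
    "deployment": ["human_oversight_defined", "monitoring_in_place"],
}


def missing_checks(stage: str, evidence: dict) -> list:
    """Return the governance checks still missing at this stage.

    `evidence` maps check names to True once the team has documented them,
    so problems surface at the stage where they occur rather than in a
    final compliance review at the end.
    """
    return [check for check in GOVERNANCE_GATES[stage]
            if not evidence.get(check, False)]


if __name__ == "__main__":
    evidence = {"is_ai_the_right_tool": True, "risk_tier_assigned": True,
                "bias_tests_on_dataset": True}
    for stage in GOVERNANCE_GATES:
        missing = missing_checks(stage, evidence)
        status = "pass" if not missing else f"blocked, missing {missing}"
        print(f"{stage}: {status}")
```

The design choice mirrors Ray's point: the gate blocks progress at the stage where evidence is missing, so nobody reaches a final compliance review only to be sent back to the beginning.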
The biggest area of challenge that I see is actually different departments working together.
And sometimes you get duplication.
So I mentioned earlier, data privacy.
In many markets, people have to do a data privacy impact assessment, a DPIA.
And that has a lot of questions that may be the same as questions you might be asking in your AI governance questionnaire.
Well, that's a real pain for people if they have to answer the same question twice for two different departments in the organization.
And that gets people annoyed, and then they want to try to avoid doing
it and so on. So duplication is one area. The opposite can also be true: your AI team think
that the data team are doing all the data questions, and the data privacy team think that the
AI team are doing that for AI models, and then nobody does it, because there's a gap, because
they haven't properly coordinated. That's the biggest area of challenge that I see.
And you can experience it with, let's say, procurement as well,
how procurement are buying in tools or assessing vendors,
but they don't have the technical capability usually to do the technical assessment.
So they have to work very closely together with the data science team,
borrow some help from technical colleagues to look at solutions.
Cybersecurity is another critical area.
There's a huge amount of interface and overlap between AI and cybersecurity.
AI is an attack surface for people from the outside to try to penetrate the organization, to subvert the AI models, and so on.
So again, you have to have really close collaboration, careful working together between those departments.
Yeah, and that's why in the US, NIST is the leading institute for that, right?
They used to lead cybersecurity standard setting.
And now they are also doing the AI Risk Management Framework.
Yeah, which is great.
Yes, exactly, having them under the same umbrella.
Yes.
Yeah.
So you mentioned how to implement the principles into practice,
like the know-how and the implementation.
This is critical, right, for the success.
So you have interviewed executives at major global companies for your book,
Governing the Machine.
Do you have some examples that you can share with us,
particularly in healthcare and life sciences?
Could you walk us through one specific example or case study
where a company implemented the governance roadmap you proposed?
What did their before and after look like in terms of
operational efficiency and risk reduction?
So let me take an example of a company that wanted to use
generative AI for its customer service agent, its customer interactions.
And they had previously had a purely deterministic rules-based system, and they wanted to make it sound better, more appealing, and actually give better answers and be much broader in scope.
And the challenge that they found was explaining to senior leadership how to deal with the fact that there will still be some residual risk remaining when you use generative AI.
The fact is that generative AI is probabilistic in how it creates its answers. Contrast that with a rules-based system, where you use natural language processing to understand the question that the customer has put into the chat window,
but the answers always come from a set list of answers. There's only one list of answers, and it's either going to be answer number one or three or five or 70 or whatever,
but the answers are fixed.
So you can't really go wrong with the answers.
You might not give the right answer to the question,
but you're not going to say something which is totally out of order.
When you have generative AI creating the answer,
drawing from a corpus of information that is checked and curated,
there's always the risk.
However well you put guardrails in place and test, et cetera, and check,
there's always the risk that maybe the answer will be something
that you really wish that the customer service agent hadn't said. And what the team found was that
trying to explain that to senior leaders who had to approve the solution before it went live was
really quite difficult. And they came with test results, which showed that they had tested many
thousands of different permutations. They tried to break the agent, etc. And the success rate was very
high, but it wasn't 100% because occasionally something goes wrong. What they did as an interim
solution was they identified a certain set of questions or question types that they felt were so
risky that they didn't want to use generative AI to produce the answers. So effectively,
what happened is when the customer asked a question, there was an initial triage. If the
question fell into this bucket of very sensitive areas, then the agent would only draw from a
deterministic list of answers. If the question was a bit more straightforward and not as high
risk, then the full generative AI agent would kick in and would provide the answer. And what
their plan is, over time, to gradually move more and more towards the generative
AI agent as they get more examples of success, more comfort that the guardrails they put in place
really are catching almost all of the problematic areas. And so that was for me a good example of
an organization that had a very thoughtful governance process in place that worked with this team
from the beginning, from conception, and took their senior leaders
through the development process step by step. They knew this was going to be quite a jump for
senior leaders to accept this residual risk in the system. So they really tried to take them on
the journey through the development steps and update them along the way. And they also gave them
lots of opportunity for hands-on testing themselves, you know, trying to use the system, trying to
break it so that they could get more comfort with it.
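To illustrate the triage pattern Ray describes, here is a minimal, hypothetical Python sketch: sensitive questions are routed to a fixed list of pre-approved answers, and only lower-risk ones reach the generative model. The keyword classifier, canned answers, and placeholder generation call are illustrative assumptions, not the company's actual implementation.

```python
# A minimal sketch of the triage pattern described above: sensitive
# questions get deterministic, pre-approved answers; only lower-risk
# questions go to the generative model.

SENSITIVE_KEYWORDS = {"refund", "legal", "complaint", "medical"}

CANNED_ANSWERS = {
    "refund": "Please contact our billing team to discuss a refund.",
    "default": "Let me connect you with a human agent who can help.",
}


def is_sensitive(question: str) -> bool:
    # Placeholder classifier; a real system would use a trained model.
    q = question.lower()
    return any(keyword in q for keyword in SENSITIVE_KEYWORDS)


def deterministic_answer(question: str) -> str:
    # Only fixed, pre-approved responses; nothing is generated.
    q = question.lower()
    for key, answer in CANNED_ANSWERS.items():
        if key in q:
            return answer
    return CANNED_ANSWERS["default"]


def generate_answer(question: str) -> str:
    # Placeholder for a guarded call to a generative model.
    return f"[LLM-generated reply to: {question!r}]"


def answer(question: str) -> str:
    # Initial triage: route sensitive questions away from the LLM.
    if is_sensitive(question):
        return deterministic_answer(question)
    return generate_answer(question)


if __name__ == "__main__":
    print(answer("How do I request a refund?"))      # deterministic path
    print(answer("What are your opening hours?"))    # generative path
```

The attraction of this design is that the residual risk of generation is confined to the lower-risk bucket, and the boundary between the two paths can be moved gradually as confidence in the guardrails grows.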
So we are coming close to the end of our episode. But before we let you go, we would like to talk a little bit more about the future,
right? And we all know that in 2025 and 2026, we can't really get through them without talking about
agentic AI. You were already hinting at some part of that. But just this week, we saw Anthropic
announcing Cowork, right, where now everybody can use AI to do things, instead of just to say things
and have chatbots.
Now, you can really ask AI agents to do things with you and for you.
And the risk profile must have changed dramatically for organizations who want to adopt this.
Right.
So, does the framework you propose in the book still apply? Is there anything in this wave
of AI agents
that we need to know more about?
Any new governance mechanisms
we need to start building when AI starts doing things
rather than just generating content?
So clearly when AI can do things, as you say, new risks come into play and some of the
existing risks become bigger because it's one thing to have the AI system tell you something
and you make a decision about what you do with it.
It's a different thing if it goes off and does it without checking with you.
So there are new risks there and there are potentially bigger risks.
I think that the framework we propose in the book still very much holds, and we were very careful
when writing the book to think about making it as fit for the future as possible, because we know
this is a fast-moving area. We don't know all the things that will change
in the next one, two, three years. But I'll give you a good example from when ChatGPT
was first released. I was addressing a group of lawyers at an event
staged by a law firm, and they had general counsel and senior lawyers from a large number of different
companies. And you may remember that just after ChatGPT came out, there were quite a few companies
that banned it completely within the organization. And they got the CIO to put controls in place
so people couldn't access it through their browser and so on. And we talked quite a lot about this
at this particular event. The event was literally two or three months after ChatGPT was released.
And we asked people, so how do you deal with ChatGPT in your company?
And a group of them had put their hands up and said, well, we've banned it.
You know, we're not letting anybody use it at all in the company.
And when we dug a bit deeper into the difference between companies which had banned it completely
and companies which hadn't, what became clear was that the companies that had
some existing AI and data governance rules in place were the ones that were able,
relatively quickly to update them and make changes that would incorporate considerations for
ChatGPT, because they had a framework. They had something to build on. The companies that had shut it
down completely were the ones that had no AI governance or controls in place at all because they
suddenly woke up to the fact that they needed to put in place an AI policy, they needed
controls and checks, they needed training, all of which takes time if you've got nothing,
you know, from scratch. And so it's my belief that if you put in place a foundational AI
governance program and process, yes, it will need to change over time. The laws are going to
change. What your company does with AI is going to change. The types of AI are going to change.
But it's much easier to adapt something if you've got a foundation in place.
So I really think it's worthwhile putting in place the foundation now.
Don't wait for the perfect solution to come along because it's not going to come along.
And then you can keep an eye on things that are changing.
And as you both know, those of us in this field, we watch every day for changes, because things are moving very fast.
But then you can go in and make appropriate changes when necessary.
Yeah, wonderful.
Yeah, that certainly speaks to the future,
but also secures us with the foundation.
We can start with what we already have.
Exactly.
Yeah.
So any parting words you want to give to our younger generation before we close here?
Well, I think that AI governance, Responsible AI, whatever you like to call it,
I think it's a fascinating and really rewarding area of activity because you are at the same time preventing risk and you're also enabling people to use AI.
And particularly in healthcare, the potential for AI is immense.
The benefits that we can achieve are huge.
So I think that incorporating a responsible AI mindset into how you go forward in your
profession of healthcare, medicine, data science, whichever particular angle you're going to take,
is really important and really rewarding.
Thank you. Very nice to hear from you today, Ray.
We would love to have another episode with you, maybe in the future,
maybe when you write another book or when you come back to New York.
I would love to do that. I've very much enjoyed the conversation.
Thank you very much indeed. And I'm sure if we wait for a little while,
there'll be a whole long list of new topics that need to be discussed.
Definitely. Thank you, Ray.
My pleasure.
ACM ByteCast is a production of ACM's Practitioner Board,
and AMIA's For Your Informatics is a production of Women in AMIA.
To learn more about ACM, please visit acm.org.
To learn more about AMIA and the Women in AMIA community, visit amia.org.
For more information about this and other episodes,
please visit learning.acm.org/bytecast.
For AMIA's For Your Informatics, check the news tab on amia.org.
This podcast was edited by Resonate Recordings.
