ACM ByteCast - Francesca Rossi - Episode 53
Episode Date: May 9, 2024
In this episode, part of a special collaboration between ACM ByteCast and the American Medical Informatics Association (AMIA)'s For Your Informatics podcast, hosts Sabrina Hsueh and Karmen Williams welcome Francesca Rossi, IBM Fellow and AI Ethics Global Leader, and current President of the Association for the Advancement of Artificial Intelligence (AAAI). Rossi works at the Thomas J. Watson IBM Research Lab in New York. Her research interests focus on artificial intelligence, especially constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behavior of AI systems. She has published more than 200 scientific articles in journals and conference proceedings and is a fellow of both AAAI and EurAI. Rossi has been the president of the International Joint Conference on AI (IJCAI), an Executive Councilor of AAAI, and the Editor-in-Chief of the Journal of AI Research, and serves on the Board of Directors of the Partnership on AI. She has also served as a program co-chair and steering committee member of the AAAI/ACM Conference on AI Ethics and Society (AIES). Francesca shares how experiences with multidisciplinary work in computer science drew her to AI and ethics, and the challenges of synchronizing with people from a variety of different backgrounds at IBM. She also talks about her involvement in the development of AI ethics guidelines in Europe. She walks through some of her concerns around building ethical and responsible AI, such as bias, lack of explainability, transparency of AI developers, data privacy, and the accuracy of generated content. Francesca emphasizes the importance of researchers working more closely with policymakers and the important role of conferences such as AIES (a collaboration between AAAI and ACM). She also offers suggestions for those interested in getting more engaged in AI ethics, gives recommendations for people interested in an AI career path, and advocates for common benchmarks that can help evaluate AI.
Transcript
This episode is part of a special collaboration between ACM ByteCast and AMIA's For Your Informatics podcast,
a joint podcast series from the Association for Computing Machinery,
the world's largest educational and scientific computing society,
and the American Medical Informatics Association, the world's largest medical informatics community.
In this new series, we talk to women leaders, researchers,
practitioners, and innovators who work at the intersection of computing research
and practice, applying AI to healthcare and the life sciences.
They share their experiences in their interdisciplinary career paths,
the lessons learned for health equity, and their own visions for the future
of computing.
Hey, hello, and welcome to the ACM AMIA joint podcast series.
This joint podcast series aims to explore the interdisciplinary field of medical informatics
where both practitioners building AI/ML solutions and stakeholders in the healthcare ecosystem take interest.
I'm Dr. Sabrina Hsueh from the Association for Computing Machinery's ByteCast series.
And co-hosting with me today is Dr. Karmen Williams from the For Your Informatics podcast with the American Medical Informatics Association.
We have the pleasure of speaking with our guest today, Dr. Francesca Rossi.
Thanks. Thanks for having me.
Thank you for joining us. Francesca Rossi is an IBM Fellow and IBM's AI Ethics Global Leader.
She works at the T.J. Watson IBM Research Lab in New York. Her research interests focus on
artificial intelligence and ethical issues in the development and behavior of AI systems,
in particular for decision support systems for group decision making. Dr. Rossi has published
over 200 scientific articles in journals and conference proceedings, as well as edited volumes,
collections of contributions, special issues of journals, and a handbook. She is a fellow of both
the worldwide AI association, AAAI, and the European one, EurAI. She has been the
president of the International Joint Conference on AI, an executive counselor of AAAI, and the editor-in-chief
of the Journal of AI Research. Dr. Rossi is involved in multiple AI committees, both in the
U.S. and Europe, and we are so excited to have the chance to speak with her more today. So thank you
again for joining us.
Thanks. Yeah.
So Dr. Rossi, you have been working in this interdisciplinary field between computer science and AI ethics. What drew you to it?
Well, when I chose computer science, that was the first decision. At the time there were not many computer science curricula in the various universities, so I felt it was something for the future, that I was doing something new, and that was exciting for me. So that's why I chose computer science. Then for my master's thesis, I decided to do it on AI, and that's where I started working on AI. I continued, and I still am working on AI.
And that gave me an additional feeling that really this science, and also technology, of AI
was allowing me to build things that were new,
that were shaping the future,
even more than when I chose computer science.
And then, after many, many years working in AI and teaching and doing research with my
students and so on, I was a university professor teaching and researching in AI for more than
20 years.
Then I went one year on sabbatical at the Radcliffe Institute for Advanced Studies at Harvard University.
And that place is really a crash course in multidisciplinarity, because every year there are about 50 fellows at the Radcliffe Institute, and they all come from different disciplines. So I was the only computer scientist there.
And then there were people covering all the other sciences, all the arts, and all the
humanities.
And then the staff of this institute forces these people to work together, to spend time
together.
I say forces because it doesn't come very naturally at first, because these people are used to working only with people similar to them,
but it really was a great learning experience.
In communicating and working with people who had different backgrounds and different questions in their minds, I started to think not only about the technical and scientific aspects of AI, but also about its impact on society.
That's when I started thinking about the ethics of AI and the impact on people, on society, the way we live, the way we interact, the way we work.
So after that sabbatical year, instead of going back to my university professor role, I decided to join IBM.
And since then, I have been working inside the company on more than just the technological and scientific aspects of AI.
Thank you. And you talked about some of those interdisciplinary challenges, such as people having to be pushed to work together. So were there any other challenges, and how did you overcome them?
Well, one initial challenge, if you want to call it a challenge, is that you have to synchronize and do some translation between the languages in order to be able to then be effective in working together. Because if you don't even understand each other, you cannot collaborate. I found this during that sabbatical year, but then I found this challenge also later on, for example, when I was for two years in the European Commission's High-Level Expert Group on AI. This was a group that worked from 2018 to 2020 to define the ethics guidelines for AI in Europe, and these 52 people were very different. There were experts of AI, but fewer than 10.
And then there were civil society organizations, consumer rights organizations,
and so philosophers, psychologists, and also sociologists, and so on.
And so the mandate of that group was to write these ethics guidelines for AI in Europe. We realized at the first meeting that these 52 people basically had 52 ideas of what AI was.
So the first thing that we needed to do was actually to publish an additional document saying what AI means, after discussion among these 52 people.
And then we could start thinking about the ethics guidelines for AI in Europe.
So that was the way we solved the challenge there.
But the challenge was also within IBM
because when we started building the governance
around the AI ethics
and involving all the divisions of the company
in the activities around the AI ethics,
of course, these people were very different
in different ways.
They were AI researchers, they were people in marketing, communication, they were the
legal people, they were people doing products, software, and so on.
So all very different people.
So the first thing we did was to build a glossary, a glossary of terms around AI and AI ethics, with their definitions, so that everybody could look a term up in the glossary in case there was any doubt, you know. And that was the agreement that these different people had reached about the terminology.
So the terminology phase is an initial one.
Of course, it can evolve, because terms are added and so on.
But that is something that needs to be solved.
Otherwise, you cannot work well together in an interdisciplinary environment.
Yeah, I can totally resonate with what you say here.
It sounds a lot like what we are experiencing in our own company as well. But I want to take
it back a little bit to also understand: this kind of translation between interdisciplinary
fields is certainly difficult and challenging, but you managed to do it well.
Is that what led you to your current role, or is there anything else you would say inspired you to pursue
your current career path?
Yeah, well, when I joined IBM, it was also a moment in time when there was a lot of
initial discussion about these issues with AI, you know, the first algorithms that were
shown to be making discriminatory decisions, for example.
So the first discussions were coming up, you know: what does it mean that this technology is beneficial to people, to society, and so on. I was fortunate enough that during my sabbatical year, before joining IBM, I joined the advisory board
of the Future of Life Institute, which was a real pioneer in convening these discussions about
the ethics of AI. So then, when I joined IBM, I also started a multi-division discussion group
around these issues, but it was kind of all over the place at the beginning, right?
And then this discussion group was transformed into a first version of the AI ethics board
inside the company. So an initial version of the governance around AI ethics, where we defined
our principles, we defined some partnerships with external organizations. We defined some activities, but then it was not a decision-making board.
It was mostly a coordination and discussion board.
And then a few years later, it was transformed into a real decision-making governance body,
which makes decisions not just about high-level principles,
but also about the concrete activities of the company.
So these were natural phases, I think, one after the other, that led from very high-level
principles to more and more concrete actions.
Yeah, and I think you've already talked about this quite a bit,
but we know that many industries and companies have already established their responsible
and ethical AI guidelines. We see generative AI coming in and the additional
risks that could come with it. So what do you see as the current status in enabling ethical
and responsible AI?
And if you were going to score our progress collectively as a field, how would you rate it?
Yeah, so as I said, you know, these phases happen not only in my personal experience, but I think in many other organizations.
So the first phase was mostly about being aware of the issues that there were, you know, with the uses of AI.
And then the second phase was principles.
Everybody published principles. I think at Harvard there was a project, called the Principled AI project, that collected all the principles around AI ethics that were published by any kind of organization.
There were more than 100 sets of principles, many of them overlapping,
but from different angles, you know, from companies, governments,
universities, multi-stakeholder organizations, so on.
So first phase awareness, second phase principles,
and then the more practical phase, you know.
So, okay, how do we implement these principles?
How do we integrate the implementation of these principles inside our own organization? I think these are the three phases,
and we are still in the practical phase, implementing the principles into concrete actions,
whether these actions are risk assessments, processes, educational material for the developers and everybody else, software tools, diversity in the composition of the teams, or governance bodies like the AI Ethics Board that we have. So all these pieces of very concrete actions
came out of these principles
that were very high level. And principles are very nice; they serve, you know, like a North Star, but developers need something more concrete than the principles.
And of course, as you say, with the evolution of the technology, like with machine learning and then generative AI and so on, these actions also need to be updated over time.
That's why, for example, we recently published a very large table of foundation model risks, comparing them to the risks we were already aware of for traditional machine learning approaches, to see which ones remain the same with generative AI, which ones are amplified, and which ones are completely new, and so need new tools or updates to the tools, new educational material, new consultations, and so on.
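Purely as an illustration, a comparison like that could be sketched in code; the risk names and statuses below are hypothetical placeholders, not entries from the actual IBM table:

```python
# Hypothetical sketch: organizing a foundation-model risk comparison.
# Risk names and statuses are illustrative placeholders only.
RISK_STATUS = {
    "bias in training data": "same",        # known from traditional ML
    "lack of explainability": "amplified",  # harder with very large models
    "hallucinated content": "new",          # specific to generative AI
    "copyright of generated output": "new",
}

# Group the risks by status to see what needs new tools or updates.
for status in ("same", "amplified", "new"):
    risks = [name for name, s in RISK_STATUS.items() if s == status]
    print(f"{status}: {risks}")
```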
So I think I saw a continuous trend of increasing these activities, and also an increasing number of people that work on this inside the company, as well as the resources. And not only increasing, but increasing not in a linear way, but in a kind of exponential way. You know, at the beginning it was maybe going a little bit slow,
and then, boom, it grows exponentially very rapidly,
also because there are forces that push in this direction
that come, for example, from the point of view of IBM,
from governments that are increasingly generating laws about AI,
but also from clients themselves that want to know,
for the solutions that we provide,
what we do about these AI ethics issues.
So governments, clients, media attention,
so a lot of internal but also external forces that push and make this area grow.
Yeah, so there are so many areas we can grow here, right?
And certainly we never have enough people on this.
We see the need for this everywhere we go.
With so many AI regulations coming up on the horizon as well, certainly a lot of people
are being kept awake at night. We're wondering, for you in particular, in your capacity as
the current president of AAAI: what are those issues that keep you awake at night?
Are there particular regulations or particular risks that you worry about the most?
And the follow-up question to that will be: for professional societies like ours,
like AAAI, like ACM, what can we do, right, to gather around this together?
So as we said, you know, the technology evolves, and the uses of these new techniques in AI bring about additional opportunities, but also additional, and legitimate, questions about possible risks in their uses.
So, for example, before getting to generative AI: for classical machine learning, of course, there are the usual issues related to fairness, so the presence of bias, and the ability to detect and mitigate bias in an AI system, in the training data, and so on.
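As a minimal sketch of one common bias check (a hypothetical illustration, not a specific tool discussed in the episode): demographic parity compares a classifier's positive-prediction rates across groups defined by a protected attribute.

```python
# Minimal sketch (hypothetical example): demographic parity difference,
# one simple way to detect bias in a classifier's predictions.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    rate_g0 = y_pred[group == 0].mean()  # P(prediction = 1 | group 0)
    rate_g1 = y_pred[group == 1].mean()  # P(prediction = 1 | group 1)
    return abs(rate_g0 - rate_g1)

# Toy data: 8 predictions, first four from group 0, last four from group 1.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5, a large gap worth reviewing
```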
But there is also sometimes a lack of explainability: the system is kind of a black box that produces an output, but we don't know how it gets the output from the input.
There is the transparency issue, where, you know, those that produce the system need to be transparent enough to say how they built it.
The privacy issue, of course,
because machine learning relies on data.
So all the data issues, including the privacy ones. And then with generative AI, of course, the issues related
to the content that is generated. So issues related to misinformation, because we know that
generative AI is extremely good at writing text or generating images that are very realistic, text that is even perfect from the syntactic point of view. But sometimes it makes mistakes,
factual mistakes. So we need to be aware of that if we want to use it. The fact that it makes these
mistakes, it doesn't mean that it's not usable and cannot be useful in many scenarios, but we need to be aware of that in order to use it in the most appropriate way.
So if you ask it, for example, for factual information, it's better to verify it. Then there are other risks that are not related to limitations or mistakes of the AI system, but actually to how humans intentionally decide to use these AI systems, such as, you know, deepfakes, which can impact our society by manipulating public opinion and so possibly disrupting even the democratic process, the elections, the candidates.
And so the impact on society, as well as the impact of these AI systems in the education
system, for example.
We know that students are using some of these systems to bypass the learning process,
to write their essays for them, you know, and that's something
that should be resolved.
Even in the research community, and I see that at AAAI,
there are people that write some parts of their papers through these technologies,
or even parts of the reviews of the papers.
So we need to be aware of possible actors that may use these tools inappropriately, because
the tools should be used, but in the right way:
to augment our own capabilities, to help us grow and do things at a higher level than what
we could do earlier, and not to bypass any process that can be helpful for us to
grow.
So again, there are several issues, but most of the issues are, in my view, related
to two things.
One, that the technology still has limitations as well as capabilities.
So the typical case is this hallucination thing.
This is a limitation of the technology.
Maybe in the future,
by building the technology in different ways,
maybe we will overcome this thing, okay?
But for now, we have to be aware that this is a limitation.
And then other risks are related to how we decide to use the technology:
whether we decide to use it in a fully autonomous way when it's not the right
scenario for full autonomy, for example, and we need to have human oversight
or a human in the loop.
So these are the two things,
limitations of the technology and misuse, I would say.
Yeah.
So are there things you feel professional societies can do a little bit more, on top of that?
Yeah, well, professional scientific societies
like ACM, AAAI, and AMIA
can certainly raise awareness.
So, for example, it's even in the mission of AAAI: the mission
of AAAI is to advance
AI research and
the responsible use of AI.
So it's already in the mission since
the start that
it's not only to advance the technology
per se,
that's not the ultimate goal, but the
ultimate goal is to make sure that it's used responsibly.
So awareness.
Also, another thing that AAAI is doing is to consult with policymakers.
So, to help policymakers understand, and this can come from scientific and professional
societies, but it can also come from companies, from others that are more technically involved.
So to help policymakers understand what is the best way to regulate AI, if there is a
need to regulate it and how, you know, and so that's why I think what the European Commission
did with the European AI Act was very wise because it really started with these multi-stakeholder consultations
where people from different areas, including professional scientific societies, could say,
okay, from my expertise, from my knowledge of AI, I think this should be the advice about
how to regulate AI. So that's another area where these associations can contribute.
But of course, these are associations that organize events,
like the conferences you mentioned;
AAAI has conferences,
symposia, and many other things.
I think these can raise the awareness
of the researchers themselves
that AI is not only a science and technology
at this point, but a scientific, technological, and also social field.
So to help researchers be exposed also to the social aspect and considerations that
they need to make whenever they do their research. For example,
at AAAI, whenever people submit a paper to the conference, we ask them to write a section in the
paper to discuss the possible ethical implications of the work that they are doing. So that is
something to raise awareness in research, because the next generation of researchers
will have to be more multidisciplinary, more sociotechnical, than what we had in the past,
than the way I was educated, for example.
Yeah, yeah, totally with you. In AMIA,
we have been hosting the AMIA showcase for three years in a row, right? And we do that to ask all the submitters of AI evaluations for their ethical considerations
to be added at different stages of their health AI evaluation.
So, yeah, thank you for bringing that up, that really important point.
And also, for that multi-stakeholder view, one thing that I think both ACM and AMIA have been doing is to bring awareness
and bring more parties to the table together, to generate consensus from the community first.
So another thing that actually AAAI is doing together with ACM is to co-organize a conference called AI Ethics and Society,
which is exactly this attempt to put together in the same place, in the same event, in the same conference, researchers and scientists from different disciplines.
So this conference, for example, has four program chairs, one from AI, one from philosophy, one from psychology,
one from policy.
And then the papers, posters, and invited talks and so on are all very
multidisciplinary.
And so that was a way for ACM and AAAI to really get together in this multidisciplinary
way, not just as technology associations.
Right. Yeah. And in this multi-stakeholder view, did you find any particular party's voice
is lacking? At least in AMIA, what we find is that the patient's voice is often missing at the table,
right? That's why we need to put in actual effort to bring them in.
Like last year, we had this conference with Harvard Medical School
to make sure that we had a patient advocate included.
Yeah, that's a good point.
Of course, in that case, it's patients,
but in general, it's the communities that are impacted
by whatever these researchers or developers are building, right?
And it is true that usually this voice is not very present.
It may be present indirectly. For example, I've seen at this conference,
AI Ethics and Society, that happens every year,
last time, a lot of papers describing studies,
some of them related to healthcare, where they were asking about the impact of AI on the doctors, on the patients,
and so on.
But the conference itself did not have these communities present there.
Some of the papers were describing the results of these surveys, of these consultations, but it's true that those communities themselves
were not present at the conference.
So they were present only because they were in the papers
as being part of the study described in the papers.
So you're right that most of the people at this conference,
for example, are those that produce the technology or produce the solution based on the technologies.
And then they also do their own study at home and then they come to the conference and present the result of the study.
But we don't see a lot of that community that is impacted by the work.
ACM ByteCast and AMIA's For Your Informatics podcast are available on Apple Podcasts,
Google Podcasts, Spotify, Stitcher, and other services. If you're enjoying this episode,
please subscribe and leave us a review on your favorite platform.
Wonderful. Yeah. And in your various roles, and very much as a pioneer in this area, are there any changes you are proud that you've been able to facilitate? And for that matter, are there any new changes that you would suggest?
Well, I would say what we did inside IBM: having the idea of building this internal board,
that's something that I really feel was the right thing to do.
And in fact, that is what started all these very concrete activities, which I did not do by
myself, of course, but with a lot of other people in the company, and other divisions putting in resources and so on.
But this initial idea of really the need for governance,
centralized governance inside such a big
and global organization like this company,
I think it was the right thing to do, to pitch it,
and to insist at the beginning inside the company to build it.
And because, again, in a global company, it's important that this governance is centralized,
although it's very connected to all the divisions and to all the different roles.
But it's important that it's centralized because, again, we cannot have different standards
in different countries.
In a company that has offices everywhere in the world,
basically, you want to have the same principles
and the same standards of risk assessment,
evaluation or use of the tools,
thresholds for bias, whatever,
all over the world, whatever the company does. And not just to wait for the law;
when the law is there, you have to comply, of course, but really being very proactive and saying: this is how I want to behave with this technology,
this is what I want to do, no matter whether there is a law that forces me to do that or not.
And that, I think, is implicit in this initial idea that I had of having this centralized
governance like the AI ethics board.
Yeah, and there is a new generation of informaticians and computer scientists who will be listening
to this podcast episode, who will be interested in diving deeper with you into this area.
Are there any suggestions for where they should start?
Well, first of all, at this point there are many books on AI, and also on AI ethics; there are really many books
that tackle these issues from different angles.
So, I mean, there are really many, many different books that one can read.
But even just going to these conferences is useful,
like the one that I mentioned, AI Ethics and Society,
because at these conferences people can get an idea;
you walk around, you talk with people.
Usually at this conference, there are paper presentations and invited talks, but then there are poster sessions.
So in the poster sessions, you can go up to the authors, ask, discuss.
So you get an idea of what people are doing and what are the main issues that are being addressed.
What are the main solutions that people put in place? So that is a nice way to engage with that community of people,
that multidisciplinary community that thinks about these issues
and puts in place concrete solutions for AI ethics.
But you also find companies at these conferences.
At AAAI, for example, there are usually many companies.
So if you have in mind to change your career or do something new, you get to see the
internship opportunities or even the full-time position opportunities that are around that
area.
Yeah.
And you've given such really good insight on this.
So stepping back a little bit, reflecting on your own career,
what might be some of those early career moves that you made that,
thinking back, you're like, oh, this was really useful,
I could recommend this for newcomers? And for that matter,
even mid-career, what were some of the things that you did? What would be your advice for those who are exploring maybe different career paths
in this area? So again, my early career, it was not very multidisciplinary. Even mid-career,
it was not multidisciplinary because I had more than 20 years in academia where I was doing AI and I was only talking to AI people.
And that, of course, is very nice, because you are in your comfort zone.
So you are there, you know, you talk to people that completely understand you.
So it's very comfortable being in that position. But then, when I decided to go a
little bit out of my comfort zone with the sabbatical year at the Radcliffe Institute,
that was really, you know, an eye-opening thing. It was maybe more challenging than remaining
in my comfort zone, of course, but it was not only eye-opening for me; it also opened a lot more possibilities and opportunities to me that were not in the landscape before that.
So the only suggestion that I can give is really to not be afraid to experiment, to go out of your comfort zone, and to work or collaborate with people that are very different. Again, it may be a bit challenging at first, but then it's really very rewarding, and you really feel that you can have an impact outside your vertical. You can really have an impact also on other disciplines, and learn a lot, because if you keep talking to the same people, or the same kind of people, yes, you learn, but you can learn much more if you start talking and working with people that are very different from you. And of course, in my case, I did not do that for many, many years, no? But then when I decided to do it, I said, oh my God, why didn't I do it earlier?
But it was the right moment in time. And again, it doesn't necessarily have to be like I did it, that I moved from academia to a company, from Europe to the US.
I mean, it doesn't have to be that drastic a change,
but to really go out and be aware of the fact that even if you are working in a technology,
the right stakeholders are not just technological people.
There are many stakeholders of that technology that are outside the technology
and they are societal stakeholders. And so the best way to advance the technology and the science
is to consult with these other stakeholders.
And that's one of the reasons we started this podcast series: we want to start exposing our new generation to this kind of interdisciplinary field, so they can learn how to speak the language differently and to be braver, as you have experienced, in stepping out of their comfort zone.
Yeah. So I remember, for example, that until several years ago it was okay, at the AAAI conference or some other AI conference, for an author presenting his own paper, a technological paper, you know, about AI.
I remember that people were asking, oh, but what about the societal impact?
And they would say, oh, I'm just a researcher in AI.
Somebody else will take care of the societal impact.
So this maybe was okay many years ago.
Now in the last 10 years,
it's clear that this answer is not okay anymore.
It's not acceptable.
So you are a researcher, you do the technology,
you do the science,
but you need to be aware of the potential societal impact
and be able to answer the questions related to the societal impact.
So that's an evolution that really is very important.
Right.
And that leads us to our next question, which is coming back to our discussion about
professional societies a little bit, right? In this intersection,
we have definitely all been experiencing the need to talk to people in
different fields, in AAAI, in ACM, in AMIA.
So are there particular activities that you feel would have great
potential for these different professional societies, which seem to be
touching different audiences but, at the end of the day, might really serve the same group of people
in the future? How do we move forward together?
Yeah, no, I think that there are many things that can be done together: joint events, not necessarily full conferences, but even joint parts of conferences.
For example, AAAI has an annual conference with more than 4,000 or 5,000 people, but
then it also has spring, fall, and summer symposia, which are kind of workshop-like,
much more informal and smaller events.
So those could be joint.
Also, within the AAAI conference, there is a program that we
started a few years ago, called the Bridge Program, which is exactly to build bridges between
AI and other communities: AI and a vertical sector, AI and something else. So that could
also be a place where, you know, things could be joint. One of the bridge events, let's say, could be about AI and health,
with ACM and AMIA, you know, doing something together related to health informatics
or anything related to that.
So there are many activities that can be done.
For example, there can even be joint magazine issues. AAAI has a magazine
called the AI Magazine that is available to everybody, also
non-AAAI members. And this magazine has issues
with various articles, and each issue has a theme.
So there could be a theme of
joint interest around AI and healthcare or
other topics. So from the point of view of AAAI, I think there are many things that
can be done jointly by these professional and scientific associations.
Yeah, that's very helpful. Even thinking about how AI, as we know, is not new to computer scientists,
certainly in natural language processing; we've seen it tried over and over again,
maybe for the last 40 years. But lately we see a maturity of infrastructure support for
large-scale computing, and even the availability of pre-trained models has definitely increased
access to things that were previously maybe
research-only technology. So this has led to many real-world applications, certainly in
healthcare as well. So in your mind, what is the most important AI application thus far? And
what do you see coming up that makes us say, this is really important?
So first of all, I'm sure you are much more experienced than I am in understanding
what's the best application so far of AI in healthcare.
But I would say anything that requires the analysis and interpretation of large amounts
of data. This is what AI and machine learning techniques are very good at: again, supporting the activities of the health professionals
in making decisions based on a more sophisticated analysis of the data.
So whether it's radiology or whether it's something else,
you know, whenever there is a lot of data to be analyzed,
I think that AI can help.
Again, being careful about what should be automated and what should instead be given
as knowledge, from data, to the doctors, to the health professionals, so they can make
more informed decisions. But for the future, I think that healthcare is still done in a very siloed,
vertical way, just like education in some sense. Education, at least when I studied,
and I think in Italy still, when people study at the university, they either study informatics or philosophy
or psychology.
I mean, they're very vertical things.
And I think that the way the health of a person is handled is still a bit vertical.
You have a specialist in this area, and then a specialist in this other area, and a specialist
in that other area.
So I think that AI can help to have a more holistic approach
to healthcare, which I have the impression could lead
to higher success in resolving issues in a person's health.
A more holistic approach where data coming from different sectors
and disciplines around the health of a person
can be combined to get a better idea of what can be done about the health of a person.
And then, of course, there are all sorts of preventive or closed-loop approaches
where AI can be combined with neurotechnologies, for example, that can read data from sensors
but can also write data into the central nervous system.
Think, for example, of seizures: the data is collected, AI interprets it and predicts that there will be a seizure,
and then the neurotechnology injects some substance to mitigate it;
or a similar loop for treating or mitigating Parkinson's tremor.
So this closed loop, where AI is combined with other technologies,
for example neurotechnologies, can help in an always-ongoing analysis of sensor information and then intervention.
So that to me is a very interesting avenue.
The future can be expanded from these two examples that I gave.
And whether it's healthcare or not, right, this has to be evaluated before it can be put into real-world scenarios to be practically used.
So I'm wondering, in terms of evaluation, did you see any important pieces there that should be included, in your mind?
Since there are so many guidelines out there, right?
Yeah, of course. I mean, the more we are using these technologies
in making decisions that affect people's lives, like healthcare or financial well-being, you know,
very important aspects of the overall well-being of a person,
the more we have to be careful
about the technologies that we employ,
we have to vet them,
we have to evaluate them over common benchmarks,
for example, that are related to the scenario
where you want to deploy these technologies.
For example, NIST in the US has developed a risk assessment framework
for AI in general, not just for specific sectors.
And this is something that can be standardized:
to agree on what is the right way to evaluate,
whether it's benchmark, whether it's standards,
whether it's some thresholds in the risk assessment.
So the evaluation, whether it's internal or external red teaming;
there is a lot of discussion right now about red teaming
of large language models, and so on.
And I think that agreeing on what the right benchmarks are,
or what the right ways to evaluate are, is very important.
Many years ago in AI, this was not very common;
there was no such agreement.
The first time that researchers in AI agreed
to evaluate AI systems on a common set of benchmarks
was when the ImageNet database was put together.
Because for the first time you say, oh, okay, everybody working on image interpretation
should tell me how the system behaves on that same database or the data set of images.
Then, of course, we discovered
that there were issues with this data set.
But the point of having a common benchmark,
a common data set to use to evaluate,
that's very important.
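As a minimal sketch of why a common benchmark matters (the models and data below are toy placeholders, purely illustrative): when every system reports the same metric on the same fixed test set, the numbers become directly comparable.

```python
# Minimal sketch (hypothetical models and data): the value of a common benchmark
# is that every system reports the same metric on the same fixed test set.
from typing import Callable, List, Tuple

Example = Tuple[List[float], int]  # (features, true label)

def accuracy_on_benchmark(model: Callable[[List[float]], int],
                          benchmark: List[Example]) -> float:
    """Fraction of benchmark examples the model labels correctly."""
    correct = sum(1 for features, label in benchmark if model(features) == label)
    return correct / len(benchmark)

# Toy shared benchmark and two toy "models" compared on equal footing.
benchmark = [([0.9], 1), ([0.2], 0), ([0.7], 1), ([0.1], 0)]
model_a = lambda x: 1 if x[0] > 0.5 else 0
model_b = lambda x: 1  # always predicts 1
print(accuracy_on_benchmark(model_a, benchmark))  # 1.0
print(accuracy_on_benchmark(model_b, benchmark))  # 0.5, directly comparable
```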
Yeah, and that set up the success of Transformers and many of the now technology-transforming inventions that we have seen in this field.
Yeah, so there are many.
So, of course, ImageNet is less used now,
but there are more and more benchmarks to evaluate the capabilities of these AI systems.
Of course, we have to be kind of careful to evaluate over the right benchmarks.
For example, I've seen a lot of evaluation of large language models over benchmarks that
we use to evaluate human beings.
For example, the bar exam or the admission exam.
But of course, those are designed for human beings, for the capabilities of the human
brain, and not for machines.
So machines are very different in capabilities.
For example, they have an almost infinite memory, because you can add as many things as you want, and an almost perfect memory.
This is something that we don't have as human beings.
So I don't think it's very appropriate to use benchmarks,
the same benchmarks that we would use,
the same tests that we would use for human beings.
Yeah, that's a good point.
We should also test machines for reasoning and planning, right?
Which humans are best at.
Yeah.
And did you see any industry standards emerging at all?
Well, in AI and AI ethics,
there are already several initiatives
from many standards organizations.
So I think the first one to have standards
around AI ethics, not just AI, is IEEE.
IEEE is a global association that also has a standards arm.
For example, IEEE is the organization that defined the global standard for Wi-Fi.
The reason we can go anywhere in the world with our computers and plug into
Wi-Fi is because everybody uses this standard.
So IEEE is a very active standards organization, and already several years ago they put together
a program of standards called the P7000 program, with about, I think, 13 or
14 standards, all about AI ethics. One is about fairness, one transparency, one robustness, one, I don't know, value embedding.
I'm involved in one of them, about AI nudging.
Yeah, okay.
So that's really the first time
that I saw standards explicitly around AI ethics.
But of course, there are standards on AI coming up
and already finalized, like from ISO.
But there will also be a lot of standards coming up in Europe
from CEN-CENELEC, which is the official European
organization that defines standards that can be used
within European laws.
So now that the European AI Act has been approved,
and within some months and years will be applied all over Europe,
the standards produced by organizations like CEN-CENELEC will define how this law will actually have to be implemented by the various companies, users, and so on.
So many standards will come up very soon from that organization.
Yeah, and will that lead to more self-regulation
before AI regulation comes up,
or do you see this all together as one effort?
Well, again, in Europe,
the regulation has already been approved
by the European Parliament.
Now, the regulation also says
that in six months, or one year, or two years,
parts of the regulation have to be implemented,
with different timings.
So the standards organizations in Europe now
have to work very fast,
because by the time the regulation has to be applied, the standards have to be in place.
Otherwise, people won't be able to know what to do to actually be compliant with the regulation.
And that's the same in the US, right? So standards have to catch up.
Right, right. Yes.
But I think there is a role not just for the standards of Europe or the US or other regions, but also for these global standards, no? Like the IEEE standards, which are not for a specific region; they are global, same as the ISO standards. So it's very important to have interoperability globally, not just within a territory.
Wonderful. This has been such an informative and exciting conversation about AI.
And I know I'm most excited to see
the three strongest professional societies,
AAAI, AMIA and ACM,
really partner up and have a great collaboration.
And with that, are there any parting words
that you'd like to share with us?
No, but I'm really looking forward
to these three societies collaborating on these
topics and doing actual things that can be impactful and inspiring to all the members, and
beyond the members, of these organizations. Not only inspiring, but also very impactful
in concrete ways. And thank you for having me. It was a nice chat, and I'm looking forward to the next steps.
Thank you.
Thank you for listening to today's episode. ACM ByteCast is a production of the Association for
Computing Machinery's Practitioner Board. And AMIA's For Your Informatics is a production of Women in AMIA.
To learn more about ACM, visit acm.org.
And to learn more about AMIA, visit amia.org.
For more information about this and other episodes,
please visit learning.acm.org/bytecast. And for AMIA's For Your Informatics podcast,
visit the news tab on amia.org.