Microsoft Research Podcast - 105 - Responsible AI with Dr. Saleema Amershi
Episode Date: February 5, 2020
There's an old adage that says if you fail to plan, you plan to fail. But when it comes to AI, Dr. Saleema Amershi, a principal researcher in the Adaptive Systems and Interaction group at Microsoft Research, contends that if you plan to fail, you're actually more likely to succeed! She's an advocate of calling failure what it is, getting ahead of it in the AI development cycle and making end users a part of the process. Today, Dr. Amershi talks about life at the intersection of AI and HCI and does a little AI myth-busting. She also gives us an overview of what – and who – it takes to build responsible AI systems and reveals how a personal desire to make her own life easier may make your life easier too. https://www.microsoft.com/research
Transcript
We're trying to make sure we think carefully and thoughtfully about how to build these systems.
So in that sense, we might need to slow things down.
But we're also trying to push the boundaries, right?
That means like coming up with new techniques so we can then accelerate our progress.
What are the new methodologies and techniques and tools we need to build so that we can still make rapid progress, but do so carefully?
You're listening to the Microsoft Research Podcast, a show that
brings you closer to the cutting edge of technology research and the scientists behind it. I'm your
host, Gretchen Huizenga. There's an old adage that says if you fail to plan, you plan to fail.
But when it comes to AI, Dr. Saleema Amershi, a principal researcher in the Adaptive Systems and Interaction group at Microsoft Research,
contends that if you plan to fail, you're actually more likely to succeed.
She's an advocate of calling failure what it is, getting ahead of it in the AI development cycle, and making end users a part of the process. Today, Dr. Amershi talks about life at the intersection of
AI and HCI and does a little AI myth-busting. She also gives us an overview of what and who
it takes to build responsible AI systems and reveals how a personal desire to make her own life easier may make your life easier too.
That and much more on this episode of the Microsoft Research Podcast.
Saleema Amershi, welcome to the podcast.
Thanks. Thanks for having me.
You work in a group called the Adaptive Systems and Interaction Group, and you're a principal researcher there.
Tell us in broad strokes what you do for a living and why you do it. What gets you up in the morning?
Sure. So I work at the intersection of human-computer interaction and artificial intelligence.
And so I create technologies and techniques to
help people both build and use AI systems. So that means, you know, I think a lot about developers
and data scientists and the tools that they can use to create our AI systems, but also the end
users who ultimately will use and interact with AI systems in their everyday lives. And so I think
about how we can make these systems and people interacting with them
more efficient and effective. And then more recently, I've started to think a lot more
about responsible AI and how we can help people interact with these things safely and responsibly
so that they can trust them and use them to help them in their everyday lives. And so, you know,
if I think about what wakes me up in the morning, it's this responsible AI stuff.
It's both exciting and terrifying.
You know, there's so much potential for AI to help people and society, but also a lot
of potential for harm.
So that gives me a lot of work to do.
Well, before we get specific, I want to get a little geeky from an academic point of view
and talk about methodology.
I don't talk about that much on the show. And I think it would be interesting to kind of take a quick look at
the tools that you use to equip ML software engineers with the tools they'll use.
So a lot of my work involves designing and building new interactive AI systems. So that
means I end up using both design tools and prototyping tools,
as well as, you know, development tools and machine learning tools to build AI models.
And I think, you know, to build these systems effectively, you really need to understand
some of like how these AI and machine learning systems work. You know, you need to understand
what knobs are available to give to people so that they can interact with them more effectively. So I think a lot of my work involves
actually like building and developing systems. And then in terms of methodologies, I use both
qualitative and quantitative methods. You know, I really like to understand the needs of people
before I start building things. That's really this, you know, user-centered design approach,
right? Where you really like all your decisions are based on user needs. And oftentimes for that,
you use a lot of qualitative methods, interviewing techniques and surveys to get that sort of rich
feedback. But at the same time, I come from a math background. So I really like to see numbers
and hard evidence. So I also do a lot of quantitative research, so controlled studies where we statistically compare things so I can trust my own work and understand whether or not the things I build actually help people.
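To give a concrete picture of the kind of controlled comparison described here, the snippet below is a minimal, hypothetical sketch: task-completion times from two made-up study conditions are compared with an independent-samples t-test. The data and variable names are invented for illustration.

```python
# Hypothetical sketch of a controlled quantitative comparison:
# did a new tool change task-completion time relative to a baseline?
from scipy import stats

# Invented task-completion times (in seconds) for two study conditions.
baseline_times = [41.2, 38.7, 45.1, 50.3, 39.9, 47.0, 44.5, 42.8]
new_tool_times = [33.4, 36.1, 31.8, 38.2, 35.0, 30.9, 37.6, 34.3]

t_stat, p_value = stats.ttest_ind(baseline_times, new_tool_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the difference is unlikely to be chance,
# but the numbers alone don't say *why* the new tool helped -- that's
# where the qualitative methods (interviews, observations) come in.
```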
I think there are benefits and limitations to all these methods.
You know, if you use quantitative techniques,
you really have to control a lot of variables.
And that means you can really only answer
very narrow questions, right?
So yes, you can get statistical significance and numbers,
but you don't really understand qualitatively
like why this is happening
and why is this working better for people or not?
And I think I really like to use
both methods in all the work I do and like to have a quantitative and a qualitative perspective
because they really feed into each other. And that's how you can really understand
why things are working or not. Let's get into your work right now. There's a lot of discussion
today about ethics in AI. I think it's because we're starting to see some of the ramifications of these systems that we're putting out in the real world.
And Microsoft has actually been a leader in this space.
So I want to talk about several threads of research you've undertaken and the implications of your findings.
And I want to start by setting the stage and operationalizing the term responsible AI or RAI.
What is RAI and what does it look like IRL in the real world?
I'm a really practical person. So to me, responsible AI is really all about the how,
right? Like how do we design, develop and deploy these systems that are fair, reliable, safe and
trustworthy and all those things. And to do this,
I really believe that we need to think of responsible AI as a set of socio-technical
problems. Okay, so that means that we need to go beyond just data and models and making those
better. We have to think about the people who are ultimately going to be interacting with these
systems.
Even if you collect a really huge diverse data set and your models are tuned appropriately,
if a person can't effectively understand the AI or take over control when it inevitably fails,
that can also cause problems.
So I think when we create responsible AI systems, we need to think about these systems responsibly, which opens up many challenges, but also new opportunities.
Right. I like to say that failure has a lot of faces and not all of them are ugly.
You take it a step further and say that responsible AI requires planning to fail. Why? Yeah, so this is something I've been thinking about a lot lately.
And some types of failures are really inherent and unavoidable in AI. So yes, we should be doing
our due diligence and trying to make sure we deploy systems that are, you know, as error-free
as possible, that we've debugged them carefully.
We need to do all that work, but we also need to recognize that we can never get rid of all
of these errors. And that's by design. So let me give you an example. So imagine, you know,
you have a facial recognition system that's used for access control.
By design, an algorithm will be tuned to optimize for some metric.
So maybe you try to optimize for precision or recall, which really like affects the amount of false positives and false negative errors you can have.
You can never really get rid of all of those errors because by definition, an AI model is
a simplification of the world. You can never fully capture the world. So AI algorithms are designed
to do the best under the circumstances. And that means it'll sacrifice parts of the input space to
try to get something that's optimized accordingly. And so that means you will definitely have some
false positives and false
negatives. The algorithm will try to determine a model that generalizes well to new data and may
sacrifice parts of the input space. And so the choice of parameters you use or thresholds you use
is really important. And you really need to think about the user scenario there. So if you get that
choice wrong, that's going to be costly to the users.
So in this facial recognition scenario,
false positives are much worse than false negatives, right?
If somebody accesses your account,
that can be much more costly
than if you have trouble accessing it yourself.
So if you don't get that right, that's a failure.
In the same sense,
you can't avoid all false positives and false negatives. So you need to ensure that you give people mechanisms to not only understand when those are happening, but also to override the system or take over control when they inevitably happen. For example, provide another means of accessing your system that doesn't rely on that technology.
So anticipating those failures as designers and developers, if you recognize these common AI failures and make sure you design interfaces and your systems to help people address those failures, that's how we can work towards creating responsible AI systems.
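As a concrete, hypothetical illustration of the threshold tradeoff described above, the sketch below scores some invented face-match attempts against different decision thresholds; raising the threshold trades false positives (the wrong person admitted) for false negatives (the right person locked out). The scores and thresholds are made up and don't reflect any real system.

```python
# Hypothetical sketch of the false positive / false negative tradeoff
# for an access-control classifier at different decision thresholds.

# (match_score, is_actually_the_account_owner) -- invented data.
attempts = [
    (0.95, True), (0.88, True), (0.62, True), (0.55, True),
    (0.91, False), (0.70, False), (0.40, False), (0.20, False),
]

def count_errors(threshold):
    false_positives = sum(1 for score, owner in attempts
                          if score >= threshold and not owner)
    false_negatives = sum(1 for score, owner in attempts
                          if score < threshold and owner)
    return false_positives, false_negatives

for threshold in (0.50, 0.75, 0.90):
    fp, fn = count_errors(threshold)
    print(f"threshold={threshold:.2f}: {fp} false positives, {fn} false negatives")

# For access control, false positives are the costlier error, so a higher
# threshold is usually preferred -- paired with a fallback path (say, a PIN)
# for the false negatives that choice inevitably creates.
```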
Go back to the planning for failure then in the design part of that.
What does that mean?
So that's about, I would say, enumerating the common types of AI failures.
So false positives, false negatives, being uncertain, being only partially correct.
But also, again, thinking of AI systems as socio-technical problems, right? So that means going beyond just the model errors themselves,
but places where the system can fail in terms of how the user is able to interact with them.
So again, like the mismatched expectations issue, right? I would consider that a failure. If a
person has higher expectations of the system than it's capable of,
that's a failure, right? Because if a doctor is relying on a system to make clinical recommendations
about their patients, if they think that the system is smarter than it actually is,
they may over-rely on it and that can result in harms.
So what's the mitigation there? Because you've got the system that's going to fail. Is it more educating up front? This is where we can get creative as an industry.
For some of these, we often go to, let's just give people all the documentation, right? Like,
list out everything, but nobody reads documentation, right? I'm just going to say,
I don't even read the apps. Exactly, right? And so what are other ways that we can sort of expose
the capabilities and limitations of these systems? We've done a lot of work in this space to try and understand what is effective for this and other types of failures. So showing examples of the variety of things that an AI system can do effectively is a way of giving people an understanding of its capabilities. Another is giving people controls to turn the knobs themselves.
That gives them an understanding, but it also makes them part of the process. And so they're
sort of more accepting when these things inevitably break down and are more willing to
interact with them and continue to interact with them because they're part of that process.
Let's zoom in and talk about responsible AI writ large and kind of following up on some of the points
that you just made about planning for failure
and understanding that these things fail. I know that there are some myths out there about it
and some hurdles that people have to overcome, both on the making side and the using side.
So talk sort of generally about what we're facing here and what we need to do to build
responsible AI. What are the necessary building
blocks? Yeah, this is a great question and it's something we've really started to look into
recently. We've started to do some preliminary work to understand sort of the challenges
people face in trying to build responsible AI systems and also the perceptions that people have
about responsible AI issues around like bias and fairness and, you know, some of the things that we hear about in the news.
And some of what we've been finding is really interesting.
There's AIs now that are being used to make hiring decisions or recommendations.
So imagine you're using an AI system to make recommendations about who to hire for a technology job or an engineering job.
You know, we've seen in the news, sometimes these systems can be biased. And so maybe your system
is biased against recommending women for engineering positions. If you ask people about
this fairness and bias issue, a lot of people will come and say, you know, well, this is just reality.
If we try to fix this, we're actually just adding our own biases and who are we to change reality? And this goes back to, you know,
what I was saying earlier, which is that this is not reality, right? Like AI models are by definition
a simplification of the world. Like we cannot represent the world and all the factors that
will impact whether or not a person will be a good hire.
We can't represent all of that.
And so while it might be true that there's, you know, gender disparity in technology,
it's incorrect to say that this is reality.
Another thing that we hear is that, you know, this is just math, right?
AI is math, so it can't be wrong, you know? And yes, it's true, like the math is designed to do what you tell it to do. But again, going back to what I was saying earlier,
the AI is designed to do the best under the circumstances. And because you can't fully
capture the real world, that means that it will try to, you know, minimize errors, not eliminate them, but minimize them.
Right. And so even if the math is doing what it's supposed to, you know, it's operating over data
and that data can be limited. How you show that information to the user can cause problems and
failures. And so the math might not be wrong, but the AI system overall could still be wrong.
I think we should think about these humans and machines as having different error regions, right?
So, yes, each of these systems will have uncertainty, but they'll be different.
And so ideally, those uncertainty regions don't overlap that much, right?
And then you can complement each other effectively.
And it's true because AI systems can see things that people can't, right?
And vice versa.
And so I think that's the way we should be looking at these and, you know, thinking about what is that overlapping region and ensuring that people understand sort of the limitations of those systems and where it might fail versus where you might fail.
What are we facing on the making end of this? We're talking here about how these systems operate with users.
Are there any challenges that we face as developers?
I think this is where a lot of the work needs to be done.
We have to, I think, rethink how we go about building these
systems. I'm a firm believer that building responsible AI systems requires an interdisciplinary
approach. And so, for example, a lot of times we put a lot of our emphasis and resources towards
building better models and getting better data. But again, you know, these are socio-technical systems.
So we also have to think about how those systems will be deployed in the world
and who it's going to affect.
Who are the different stakeholders?
What are the implications to those different stakeholders when things go wrong?
And I think that really requires a user-centered approach.
And so we should be leveraging, you know,
the skills and expertise of, for example,
our user researchers and designers
who have the training to sort of understand
the needs of people.
And that understanding should be driving
all of our AI decisions,
including, you know, what algorithms to use.
If you need a system that can provide an explanation
to a user so that they can
make an appropriate decision, you're going to need an interpretable model. So that's going to affect
the choice of algorithm you use. Understanding the users is going to affect the parameters you
choose, the data you collect, right? All of those decisions should be driven by the scenario. And I
think sometimes we do it the reverse way in the industry, right? We build
technologies and then sort of stick an interface on it and hope it works for people. But I think
to do it responsibly, we need to do the reverse. So there's a level of opacity, I think, in terms
of form factor when we deliver an AI system. When I think of what I get now, it's a laptop, it's a phone, but inside of that
is where these AI systems are being deployed. And so I don't necessarily differentiate between
my phone that used to do what it did and my phone that now I can talk to,
or my phone that looks at my face and says, okay, Gretchen, you can come in. How do you think about design on those things that alert users that they're now dealing with AI and how do you educate about that?
AI systems are fundamentally different than our traditional computing systems.
And I think our practitioners and product teams are sort of really struggling to design effective systems because of those differences.
What we know about how to design effective traditional computing systems, like making
sure your interfaces are consistent or your systems behave consistently so people know how
to use them, know what to expect. That's something that's inherently very difficult for AI systems because
they can operate differently in subtly different contexts or from one user to the next. How do we
design for that? I think we need to come up with a lot more guidance to help our teams understand
what is the effective way of designing these systems so people can interact with them effectively.
Bill Buxton, design guru here at Microsoft Research, was on the podcast,
and he talked a lot about the need for great design from the get-go
and spoke about who gets to make decisions about how something is designed.
And we're not just talking about the form factor, we're talking about the entire package. And so with these new AI systems, how can we bring
new people to the table that really need to speak to the design upfront into that traditional system?
Yeah, this is something I think about a lot, and it requires really a cultural shift, right? It's about recognizing and understanding
the skills and expertise that each of these different disciplines brings to the table
and how they can complement each other in order to create responsible systems.
That is just something that requires education. It requires trying it out, right? Like, hey,
if you do bring people in earlier, you'll likely create a better system because now your decisions
are driven by what's actually going to be useful for people. I'm actually really hopeful right now.
I joined MSR about seven years ago in the machine learning group. And I think things were much different back then, right? You know, it was much harder for me to explain like what I brought to the table
for machine learning, you know, and why I was even there. You know, I was the first, like, HCI hire in the machine learning group. And it took me a while before people really understood the complementary skills that I brought to the table that helped, you know, make these things effective. And I think
people are more open to that now. So even though it requires this shift in our industry, I'm hopeful
that it will happen. Right. Well, and Bill talked quite a bit about the fact that the arguments that
you hear are, we don't have enough time, we don't have enough money. We need to ship. But there are constraints that are inherent in that cycle. There are ways to plan for this, right? Like reserving some
resources for dealing with responsible AI issues. You know, if you know they're going to be there,
so if you reserve some resources for that, then you don't run into this problem of we don't have
enough time or resources. I think another thing that helps is
calling these issues failures. I use the word failures really intentionally because
we used to call responsible AI issues, you know, issues or problems, but just that term will
put it lower in the priority stack, right? But if you call it a failure, like if people might get harmed
or if there's a bias against people, like that's a failure. That's something that's a showstopper,
right? You're not going to deploy something that has a failure. So I think it's important to talk
about it that way so that we ensure that we're actually prioritizing fixing them.
AI brings both new challenges and new opportunities and innovation in user interface and user experience.
You address this issue in a paper that proposes guidelines for human AI interaction from a research perspective, a development perspective, and a user perspective, which I think is cool.
Talk about the genesis of this work and the findings that you presented in a paper at CHI this year?
So we created the guidelines because we were really seeing that our product teams and practitioners were struggling to design for AI. And I spoke about this a bit earlier, which is,
you know, this fact that AI systems are fundamentally different from traditional
computing systems. So they're going to be inconsistent. They're going to be error prone.
So what we know about designing for traditional systems doesn't always work. This is evidenced by
just the failures that we see every day in the news, you know, that range from like funny,
like auto-completion errors to like really harmful errors. And at the same time, there's actually been a lot of research advances over the last 20 years around how to develop these types of AI systems effectively.
And for me, that was somewhat frustrating, seeing sort of people struggle and missing out on some of what's been going on in the academic community or industrial circles.
And I felt that there was a lot of sort of reinventing the wheel and wasted time. And that's, you know, partly because guidance is really
scattered across many different industrial circles. Guidance that is available hasn't
always been evaluated in a wide variety of scenarios. So if you know something works well
for like a bot, how do you know it's going to work well for any other AI system?
And sometimes guidance is presented at very different altitudes.
Like you can have really high level guidance, like make sure these things are, you know, trustworthy.
But like, how do you do that? Right.
What does that even mean?
Exactly. Right. Like, so we wanted to provide guidance that was more actionable.
Right. So we got together with a large group of people and said, hey, let's just try to synthesize what we know across the industry and come up with guidance that's clear and actionable. We gathered best practice recommendations from a wide variety of
sources. I think we had like 200 to begin with. And then we iteratively grouped them, revised them,
tested them with real practitioners to understand if they were really something that people could
detect and notice in interfaces. And that's how we developed them. We tried to take a really
rigorous and systematic approach so that we could, you know, feel confident in recommending these as things that we know are tried and true.
And I think that helps both, you know, researchers, developers, and end users. You know,
it helps practitioners design better systems. That gives the end users better systems to use.
And I think it also helps accelerate research because, like I said,
you know, I felt that we were sort of reinventing the wheel. And I think by synthesizing this work,
it can kind of reveal where the real gaps in our knowledge are. And so we can try to
then really push the boundaries into what we don't know.
Who's this for? I mean, when you say guidelines for human-AI interaction, who's your
audience? I would say primarily practitioners, so product teams. Like, we want them to, you know,
have the knowledge to create good systems, but also researchers. Like I said, you know, I want
to really advance the field. Here's what we know now. Let's figure out and improve the situation and do research in
areas that we have less knowledge. You currently chair a really interesting working group at
Microsoft on human AI interaction and collaboration. And it's a part of Microsoft's
Aether effort, which is a company-wide initiative around responsible AI. And I know
there's a lot of Venn diagram overlap with this and what we've been talking about up to this point,
but I want you to drill in a little bit on why the issue of human-AI collaboration warrants an
actual group or task force, maybe is a good word, and who's involved. Like you said, you know, much of what I've talked about today actually came out of this working group. So that, you know, the studies about people's perceptions around responsible AI failures, the guidelines work, all came out of this human-AI interaction and collaboration working group, which we call the HAKE working group. And, you know, just like,
you know, any new set of problems, this is a new space, right? Like we don't know how to do human AI interaction and collaboration well yet. And so we need people to be really trying to push the boundaries of this area, you know, people with the right expertise in order to make advances. So that includes, you know, coming up with new best practices and techniques,
but also even just advancing the state of the art in terms of the methodologies we use.
That's something that we've been looking into recently is that a lot of our techniques and
methodologies for building traditional systems don't work that well. So, you know, we have in
design, we have prototyping techniques like Wizard of Oz techniques,
for example, that's often used to do early prototyping and testing. But that's really
hard with AI systems because it's hard to mimic the different ways an AI system will behave.
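One way teams sometimes approximate a Wizard-of-Oz setup for AI is to script the wizard with a deliberate error profile, so the prototype misbehaves in roughly the ways a real model might. The sketch below is a hypothetical illustration of that idea, not a description of any existing tool: it wraps the wizard's intended answer and randomly injects wrong or low-confidence responses at configurable rates.

```python
# Hypothetical Wizard-of-Oz helper for prototyping an AI feature:
# the human "wizard" supplies the intended answer, and this wrapper
# injects the kinds of failures a real model would make.
import random

def simulate_ai_response(intended_answer, wrong_answers,
                         error_rate=0.15, uncertain_rate=0.10):
    roll = random.random()
    if roll < error_rate:
        # Simulate a confident-but-wrong prediction.
        return random.choice(wrong_answers), 0.9
    if roll < error_rate + uncertain_rate:
        # Simulate a correct-but-low-confidence prediction.
        return intended_answer, 0.4
    # Otherwise behave as the wizard intended.
    return intended_answer, 0.95

# Example: prototyping a meeting-scheduling assistant.
answer, confidence = simulate_ai_response(
    "Scheduled for 2pm Tuesday",
    wrong_answers=["Scheduled for 2am Tuesday", "No free time found"],
)
print(answer, confidence)
```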
But that means if you don't have that in place, then you need to take a dependency on a model, which means that you can't sort of test your interface early.
And that causes problems. So we've been thinking about new methodologies to enable rapid prototyping and iteration, and other methodologies for
evaluation testing and building AI systems. Let me drill in a little bit there. Technology
advances at a very rapid pace now, maybe faster than it ever has. And you're a group who are a
little bit putting the brakes on and saying, wait, we need to make sure this isn't going to harm anybody.
How much influence do you have among the various groups of people that are putting this technology out?
So I would characterize it not necessarily that we're putting the brakes on things.
Yes, we're trying to make sure we think
carefully and thoughtfully about how to build these systems. So in that sense, we might need
to slow things down. But we're also trying to push the boundaries, right? That means like coming up
with new techniques so we can then accelerate our progress. What are the new methodologies and
techniques and tools we need to build so that we can still make
rapid progress, but do so carefully. So I kind of see us as, you know, partially pressing the brakes, but partially, you know, pressing the accelerator. Well, Saleema, we've reached the part of the podcast where I ask all my guests the same
thing, what keeps you up at night? And obviously, a lot of your work is based on what keeps all of
us up at night. I'm glad you're doing it. Ethical AI is a huge issue here
and you're tackling it head on.
But that said,
not everyone's going to comply
with best practices.
So what kinds of things can be done
to mitigate the undesirable consequences
of these powerful tools
and ensure that I don't lose
any sleep at night?
Yeah, you know, I mean,
the space is, like I said earlier,
it's both exciting and terrifying for me, you know, and I really believe that to create ethical and responsible systems, we really need a diverse set of perspectives at the table.
Right. This is both at the macro level in terms of coming up with policies, like I think we need policies around this, but that needs to be driven both by industries and government agencies working
together, right? If you have just one of those entities making decisions, sometimes you'll come
up with things that just don't work. And then at the micro level, you know, building individual
products, I really feel that we need people with not only diverse backgrounds in terms of, you know, race and ethnicity and gender, but also different skill sets.
You know, people with different experiences, different tools and methodologies that you use.
How do we enable these people to work together?
And I think that's going to require a cultural shift, which is hard to do.
But, you know, I'm trying my best at the, you know, the Aether working group on HAKE, Human-AI Interaction and Collaboration.
This is sort of what we're really trying to do.
Microsoft Research is not a monolith.
People come from all over here, different backgrounds and life experiences and unique personal stories.
So tell us yours, Saleema.
What got you started in computer science,
and what landed you here at Microsoft Research?
I did my PhD at the University of Washington,
which is really just across the lake from Redmond here.
And so that made it really easy to collaborate with MSR.
And there was just so many interesting people to work with.
So I ended up doing three internships here at MSR.
I just kept coming back.
And even like when I wasn't doing internships, I would collaborate with people.
I just always loved it.
I love the energy, the breadth of experience and expertise.
And everyone is just willing to talk to you and work together.
And I sort of always knew that I wanted to come back here.
And so after grad school, you know, I applied,
came here when I first started, you know,
people would ask me if I was, you know,
back for another internship.
No, I'm here.
You again?
I'm really working here now.
I'm hired.
Well, rewind before University of Washington and your PhD.
What got a young Saleema Amershi interested? Where did you start?
Are you from Washington State, etc.? So I grew up in Vancouver, BC. Yep.
I didn't know that. Yeah, so I went to UBC for undergrad and my master's. And I actually started
off as a math major. In fact, I never thought I would go into computer science. You know, computers were just sort of becoming common in high schools and I wasn't really exposed to them that much, but I really liked math. That's what I wanted to do. I wanted to be
a math professor. And so when I started undergrad at the time, you know, you had to take computer
science courses as part of your math major. And that's, I think, when I got really exposed to computer science, which is really about putting math to work.
You know, it's about making math do things and do cool things, you know.
And that's when I started transitioning to computer science.
And then I started working at the Laboratory for Computational Intelligence at UBC,
working on intelligent tutoring systems, right?
That's where I got exposed to AI systems.
And then as I was building those systems, I found it was hard to do, you know? So I was trying to
make my life easier, you know, by building better tools. And that's kind of what led me to HCI.
And here I am working now on this intersection.
What's one interesting thing, a trait, a characteristic,
a life event, a side quest, that people might not know about you? And maybe it's impacted your
career as a researcher. I think of myself as a pragmatist. And most of what's driven my work,
I would say, and driven the path that I took, was trying to make my own life easier. You know,
so when I was working with intelligent tutoring systems back at UBC, I remember at that time,
to build these things, you would create like these giant belief networks that were hand-tuned by
experts for just one system. And that was like, yes, it was powerful, but it was not scalable.
And that's what got me into machine learning, right?
It's like, okay, how do we do this without having to hand tune all these things?
So that's how I started using machine learning for intelligent tutoring systems.
Then when I was using machine learning, like I said, you know, the tools that we had for those,
for, you know, data collection and cleaning and understanding and debugging, were just really hard to use.
And it was really difficult.
There was a steep learning curve.
So that's what got me into interactive machine learning and HCI, you know, because I wanted to create better tools for myself, you know, so I can build these things more efficiently
and effectively.
And then, you know, at the same time, I'm not, you know, I build these systems, but
I'm also a user, right?
So when I interact with these AI systems in my everyday life, you know, like social networking systems or recommender systems, it really frustrates me when I can't do what I want.
You know, I can't steer these things the way I want.
And I think that's about giving everyday people the right controls and knobs in order to steer these things.
I think there's a myth that people won't want to put in
the time and effort to interact with these or steer their systems. But I don't think that's
true. You know, I think if people start to understand the benefits of doing so, and if you
give them easy controls, they'd be willing to do so. And so really, it's about helping myself,
making my life easier, which in turn will help other people build and use these things effectively.
At the end of every show, I give my guests a chance to encourage, inspire, or even instruct our listeners pretty well in any way they see fit.
Do you have any parting words, any thoughts on what we might need from next-gen researchers for next-gen technologies?
Yeah, I really believe that there's so many interesting opportunities to work at the
intersection of different fields. There's a lot of opportunities to bridge different communities,
enable them to work together more effectively, to create really novel solutions to our problems. So, you know, what I would
recommend for, you know, the students out there, the next generation of researchers is to explore
those opportunities and work across interdisciplinary boundaries. You know, train yourself in multiple
different fields so you can understand problems that might be solved by bringing those together.
I think that could
really help change the world. Saleema Amershi, thank you so much for joining us today.
Thank you for having me.
To learn more about Dr. Saleema Amershi and how researchers are working to make AI robust
and responsible,
visit microsoft.com slash research.