Microsoft Research Podcast - Abstracts: Societal AI with Xing Xie
Episode Date: May 5, 2025
New AI models aren't just changing the world of research; they're also poised to impact society. Xing Xie talks about Societal AI, a white paper that explores the changing landscape with an eye to future research and improved communication across disciplines.
Read the paper
Transcript
Welcome to Abstracts, a Microsoft Research podcast that puts the spotlight on world-class
research in brief.
I'm Gretchen Huizinga.
In this series, members of the research community at Microsoft give us a quick snapshot or a
podcast abstract of their new and noteworthy papers.
I'm here today with Xing Xie, a partner research manager at Microsoft Research and co-author
of a white paper called Societal AI: Research Challenges and Opportunities.
This white paper is a result of a series of global conversations and collaborations on
how AI systems interact with and impact human
societies. Xing Xie, great to have you back on the podcast. Welcome to Abstracts.
Thank you for having me.
So let's start with a brief overview of the background for this white paper on societal
AI. In just a few sentences, tell us how the idea came about and what key principles drove the work.
The idea for this white paper emerged in response to the shift we have witnessed in the AI landscape, particularly since the release of ChatGPT in late 2022. These models didn't just change the pace
of AI research, they began reshaping our society, education, economy, and yeah,
even the way we understand ourselves.
At Microsoft Research Asia,
we felt a strong urgency to better understand these changes.
Over the past 30 months,
we have been actively exploring this frontier
in partnership with experts from psychology, sociology,
law, and philosophy.
This white paper serves three main purposes.
First, to document what we have learned.
Second, to guide future research directions.
And last, to open up an effective communication channel
with collaborators across different disciplines.
Research on responsible AI is a relatively new discipline,
and it's profoundly multidisciplinary.
So tell us about the work that you drew on as you convened this series of workshops and summer
schools, research collaborations and interdisciplinary dialogues. What kinds of people did you bring to
the table and for what reason?
Yeah, responsible AI actually has been evolving at Microsoft for about a decade.
But with the rise of large language models, the scope and urgency of these challenges have grown exponentially.
That's why we have leaned heavily on interdisciplinary collaboration.
For instance, in the Value Compass project, we worked with philosophers to frame human values in a scientifically actionable way, something
essential for aligning AI behavior. In our AI evaluation efforts, we drew from psychometrics
to create more principled ways of assessing these systems. And with sociologists, we have
examined how AI affects education and social systems. This joint effort has been central
to the work we share in this white paper.
So white papers differ from typical research papers
in that they don't rely
on a particular research methodology per se,
but you did set as a backdrop for your work,
10 questions for consideration.
So how did you decide on these questions
and how or by what
means did you attempt to answer them?
Rather than follow a traditional research methodology, we built this white paper around
10 foundational research questions. These came from extensive dialogue, not only
with social scientists, but also computer scientists working at the technical front of AI.
These questions span both directions.
First, how does AI impact society?
And second, how can social science help solve technical challenges
like alignment and safety?
They reflect a dynamic agenda that we hope to evolve continuously through real-world engagement and deeper collaboration.
Can you elaborate a little bit more on the questions that you chose to investigate as a group, or groups, in this?
Sure. I think I can use the Value Compass project as one example.
In that project, our main goal is to study how we can better align the values of AI models with human values.
Here one fundamental question is how we define our own human values.
There is actually a lot of debate and discussion on this.
Fortunately, philosophy and sociology have studied this for years, even hundreds of years. They have defined frameworks such as the basic human values framework and moral foundations theory, and we can borrow that expertise. We have worked with sociologists and philosophers to adapt this expertise and define a framework that could be usable for AI, and we have developed some initial frameworks and evaluation methods for this.
One thing that you just said was to frame philosophical issues in a scientifically actionable way.
How hard was that?
Yeah, it is actually not easy.
I think, first of all, social scientists and AI researchers usually speak different languages, and our research moves at a very different pace.
So at the very beginning, we had to find the best way to talk to each other.
So we have workshops, we have joint research projects, we have them visit us, and we have also supervised some joint interns.
Those are all ways we try to find some common ground to work together.
More specifically for this value framework, we have tried to understand the latest progress on their side and also how to adapt it to an AI context.
So it's not easy, but it's an enjoyable and exciting journey.
Yeah, yeah, yeah. And I want to push in on one other question that I thought was really
interesting, which you asked, which was how can we ensure AI systems are safe, reliable,
controllable, especially as they become more autonomous? I think this is a big question
for a lot of people.
What kind of framework did you use to look at that?
Yeah, there are many different aspects.
I think alignment definitely is an aspect.
That means how we can make sure we have a way
to truly and deeply embed our values into the AI model.
Even after we define our values, we still need a way to
make sure they are actually embedded.
And also, evaluation is another topic.
Even if this AI looks safe and looks well-behaved, how can we evaluate that?
How can we make sure it is actually doing the right thing?
So we also have some collaboration with psychometrics people to define a more
scientific evaluation framework for this purpose as well.
Yeah, I remember talking to you about psychometrics the last time you were on the podcast, and that was fascinating to me. At some point, I would love to have a bigger conversation about where you are now with that, because I know it's an evolving field.
It's evolving. Yeah, amazing.
Well, let's get back to this paper.
White papers aren't designed to produce traditional research findings, as it were, but there are still many important outcomes.
So what would you say the most important takeaways or contributions of this paper are?
Yeah, the key takeaway, I believe, is AI is no longer just a technical tool.
It's becoming a social actor.
So it must be studied as a dynamic evolving system that intersects with human values,
cognition, culture, and governance.
So we argue that interdisciplinary collaboration is no longer optional.
It's essential.
Social sciences offer tools to understand complexity, bias, and trust, concepts that are critical
for safe and equitable deployment. So the synergy between technical and social perspectives
is what will help us move from reactive fixes to proactive design.
Let's talk a little bit about the impact that a paper like this can have.
And it's more of a thought leadership piece.
But who would you say will benefit most from the work that you've done in this white paper
and why?
We hope this work speaks to both AI and social science communities.
For AI researchers, this white paper provides frameworks and real-world examples like value
evaluation systems and cross-cultural model training that can inspire new directions.
And for social scientists, it opens doors to new tools and collaborative methods for
studying human behavior, cognition, and institutions.
And beyond academia, we believe policymakers and industry leaders can also benefit as the
paper outlines practical governance questions and highlights emerging risks that demand
timely attention.
Finally, Xing, what would you say the outstanding challenges are for societal AI as you framed
it?
And how does this paper lay a foundation for future research agendas?
Specifically, what kinds of research agendas might you see coming out of this foundational
paper?
We believe this white paper is not a conclusion, it's a starting point. While the 10 research
questions are strong foundations, they also expose deeper challenges. For example, how do we build a truly interdisciplinary field?
How can we reconcile the different timelines,
methods, and cultures of AI and social science?
And how do we nurture talent who can work fluently
across both domains?
We hope this white paper encourages others
to take on these questions with us.
Whether you are a researcher, a student, a policymaker, or a technologist,
there is a role for you in shaping AI that not only works, but works for society.
So yeah, I look forward to the conversation with everyone.
Well, Xing Xie, it's always fun to talk to you. Thanks for joining us today.
And to our listeners, thanks for tuning in. If you want to read this white paper, and I highly recommend that you do, you can find a link at
aka.ms forward slash abstracts, or you can find a link in our show notes that will take you to the paper.