Microsoft Research Podcast - Ideas: Steering AI toward the work future we want
Episode Date: April 9, 2026
Microsoft Chief Scientist Jaime Teevan and researchers Jenna Butler, Jake Hofman, and Rebecca Janssen unpack the New Future of Work Report 2025 and explore the ideal AI-driven working world. Plus: is AI a tool or a collaborator? And why the answer matters.
Transcript
It's really what we've been living through. It's not that like every year work is changing in a generational manner.
It's much more that we are in the middle of a really big shift in sort of how digital technology can support people getting things done.
It's not predetermined. The future of work is actively being built by us, by consumers. I love that.
It's easy for us to say, let's get everyone to adopt and let's boost efficiency. Let's make everything really quick, right?
but I don't think that that's actually the future like we want to live in.
We keep benchmarking against the past.
So what can AI do?
Or can AI do what we already do?
And I think this is like a mistake.
Or maybe only the first step and the more important step comes next.
You're listening to Ideas, a Microsoft Research podcast that dives deep into the world of technology research and the profound questions behind the code.
Hi.
Hi, I'm Jaime Teevan, Chief Scientist and Technical Fellow at Microsoft, and today we're going to talk about the new future of work.
So back in 2020, researchers from across Microsoft came together to try to make sense of this seismic shift in work practices that was happening as a result of the pandemic.
And the next year, the group published the very first New Future of Work report.
Microsoft has been publishing a new report every year since, with no shortage of disruptions and major technological shifts in between.
Joining me today to explore the latest report are my colleagues Jenna Butler, Jake Hofman, and Rebecca Janssen, who are a few of the many authors on the report.
Jenna, Jake, Rebecca, welcome to the podcast.
Thanks, Jaime.
Thanks, thank you.
There are a lot of factors that shape the work people do and how they do it, from social factors to economic factors to technological factors.
And, you know, as we've learned from the previous reports that we've written together, accounting for this complexity requires a lot of different background, knowledge bases, approaches, and research methodologies.
So before we get into the specifics of the report, I'd love it if each of you could share a little bit about the experience and expertise that you bring to the contributions you made to the report and why the work you do matters.
Jenna, why don't you get us started?
Sure, yeah, thank you, Jaime.
So I've been on the report since it started in 2020, and I'm really proud of the work that we do.
I think it matters for a number of reasons.
But most importantly, I think, especially right now, people feel like technology is sort of happening to them and these changes are happening to them.
And actually, with any technology we introduce to society, that's a sociotechnical shift.
And so how people perceive it, use it, what they want to do with it, what they're willing to pay for, all these things matter.
And so the report, I think, gives some agency to people to let them know, like, what's happening right now, what's the latest research?
and also how are your own behaviors and views shaping the technology?
And when it comes to expertise, I study software engineering productivity, and right now very
specifically how AI impacts or changes that.
But my background is actually originally in bioinformatics, studying cancer.
And I've always loved multidisciplinary fields because I feel like with the type of problems
we have in today's world, the solutions often lie at the interface of multiple
disciplines. And so this report with over, you know, 50 different authors from all over the world,
I think is a really fun example of just how much great stuff you can get when you bring different
people like that together. Thanks, Jenna. How about you, Jake? Yeah, so I've been involved with
the report since 2023, so less time than Jenna. But as an author originally on bits related to
AI and cognition, which is a core research topic for our Microsoft Research New York City lab.
And more recently, I've co-led a work stream across the whole company called Thinking and Learning
with AI, or TALA for short, with Richard Banks, another researcher.
And so Jenna and Rebecca and company, who really drive and lead the report, were kind enough
to invite me to be a section editor this year.
And I gladly accepted because I know how widely read and impactful the report is.
And I think it's just a wonderful opportunity to showcase research, not only from Microsoft,
but from all around from a coherent viewpoint and voice.
Thanks, Jake.
And Rebecca.
Yeah, and we were really glad to have you joining as a section editor,
Jake, just to say that.
Yes, I joined Microsoft full-time in October 2024,
so kind of like the new joiner among the three of us.
And already during my PhD, I was interested in, like, AI and its impacts on work
and society, in particular from the economics perspective.
So I was always really excited about the group's work and was, yeah, just like really looking forward to leaning in not only on the economics perspective and those sections, but also like more broadly with like editing the report overall.
And to the point of like why it matters, I think what is so exciting about the report is the variety of like different people, different backgrounds and different topics.
And there's like so much you can talk about, speak about, but also realize, oh, AI is impacting work, but also like so many different or other parts of life.
Rebecca, I love your story too about how you had been reading the report from outside of Microsoft and then got to come in to engage.
I know there were a number of people involved this year who said that it kind of was cool like to feel it becomes something of an institution.
Yeah, exactly.
Yeah, super cool. But for listeners who are new to the New Future of Work Report, can you share a little about what it is, who it's for, what people can use it for?
Yeah, I can take that one. So obviously, I'm biased. I think it's for everyone, but perhaps it's not. But the idea is to sort of showcase the research that's been happening over the last year. So we release it annually, usually in December, on these big,
shifts that have been happening. And so the last couple years, AI has been a big part of it.
And the idea is to take research not just from Microsoft, but from external places as well,
all around the world, and try and sort of sum it up in small statements that we can back up with
research. And we are very careful to make sure we're only doing this in areas where we have a
researcher and we can make a pretty bold claim and where we feel confident in the data and that
it backs up what we're saying. And so if you just want to read one, albeit somewhat long, report, you'll get an idea of what's happening in the world of AI and work fairly broadly.
So from the economy to adoption, to thinking and learning, to specific industries, and what leading
experts outside the company are thinking and predicting as well. So it should be broadly accessible
to any sort of academic audience. You don't need to be an AI expert to read it. And hopefully it'll
help with all different areas. You know, one of the things that jumped out to me, Jenna,
sort of reflecting on the past five years. This is our fifth report. And over the ones we've done, every time when we go to release it, it's like, oh my gosh, work has changed, it will never be the same again. I was actually, like, reading the past introductions to the report. In 2021, you know, thinking about the pandemic, it was, work will never be the same again. In 2022, as we were shifting to hybrid work, we said work is changing faster than it has in a generation. 2023, we've been living through not one, but two generational shifts in how we work.
And then, you know, more recently, obviously, we've been talking a lot about the transformative impact of AI on productivity.
And, like, one thing that was fun about doing this report was sort of looking at these what felt like different shifts over time.
and like being able to see the through threads and the connections.
Because really what we've been living through,
it's not that like every year work is changing in a generational manner.
It's much more that we are in the middle of a really big shift
and sort of how digital technology can support people getting things done.
And I'd be curious about what changes in attitudes and understanding of AI and work you all have witnessed in these past five years across industry and academia, and even on an individual level, like, how it's changed for you personally?
I can kick us off with that maybe.
I think it's pretty amazing, like, in the last three years to think about just how much
in the research world has changed on generative AI and work.
You know, like I remember, like, January 2023, you know, people were just off to the races. Everyone was doing everything they could to just evaluate a model in isolation, because that's what people had access to.
But there was very little in terms of humans in the loop and people evaluating what happens
when it's not just a model taking a standardized test or a benchmark.
And so that was something that we immediately focused on because it really hit our expertise in the lab here.
And, you know, there were others, but it was still kind of limited in terms of who had access to the models
and who had the capability to like design and run experiments that involved, you know, real people, right?
And even then, it was kind of limited to laboratory experiments, right?
And now, you know, fast forward three years, and pretty much everyone has access to any model they want to.
They have amazing tools to build and design experiments, and they can run them in the field, right?
And I think there's also been a shift from, okay, how much does this tool speed us up, to what are the bigger, broader effects, which is all the exciting stuff, I think, for thinking and learning in particular,
that these tools have beyond just efficiency.
So I think it's just amazing.
Like, in no other time, have you seen this leap from, you know, a three-year period from, like,
a few people doing small lab studies to, like, lots of people doing field experiments with,
you know, wide-reaching implications?
Yeah, Rebecca or Jenna, have you observed, in your own work practices, sort of what Jake is talking about, how his research is changing, have you been observing things like that as well?
Yeah, definitely. I would say it's just so interesting to see how these
tools can help you. I mean, I, when I started, or like, I finished my PhD kind of like throughout
this wave of like AI really picking up and just like even in the short time seeing, oh, where does
it help me? Where does it not help me that much? But also the stress, oh, where do I want to
stay involved. And I think that's still, like, an ongoing process, at least for me,
to figure this out. And I think that's also what I hear from other people, that they're just like
experimenting a lot, playing around with this, and figuring out, okay, where does it actually
change things and change workflows on the broader level. Yeah, I think, Rebecca, to that point of,
like, where does it help me or where does it not? Something that has struck me over the last five years of the
report is how nuanced it is and how we anticipated certain things and it wasn't necessarily like that.
Like when we all went remote, we thought, oh, people will be lonely and there were studies looking
at this. And it was like, wait, some people are really thriving. Who would have thought? And then hybrid
work, like, we don't all need to go back or we need to go back sometimes. And then with AI,
this incredible tool, everyone's going to benefit. And then we saw, oh, there's so many factors as
to who benefits and how they benefit and whether they believe it's going to be useful even impacts
it and what kind of tasks they're doing and what their problem solving style is. So I think the
uniqueness of all of this and how each worker is different and there was no single answer has been
really fun to see and watch as well.
Yeah, yeah. So I like this, thinking about the different ways that people, like, even just listening to the three of you and seeing the variation in the ways that you're thinking about your work practices changing. Adoption clearly matters a lot, and I know that's something that
we center in the report. Jake, you talked about how everybody has access to models, but not everybody
is actually using the models, and we're certainly not using them in the same way. I was wondering
if you could tell us a little bit about what the report says about today's level of adoption,
and like who's using it and how.
So what we see in the research, and this is mainly based on, like, surveys being conducted in different countries, and then of course also some more, like, field experiment studies, what we see is that AI adoption is definitely increasing overall, but it's really heterogeneous and more nuanced in depth, like who is using it and also, like, for which purposes.
So a German survey found that about like 38% of the respondents were using AI for work.
But this is just like the average.
And we do see like lots of differences across like industries.
So there were other surveys where the results showed that IT and procurement, for example, were industries or sectors more open to using AI than maybe marketing or operations.
There also has been some evidence on men being more open to using it than women. I don't know what the gap looks like right now.
I hope this is like converging even more.
But this is maybe like on the high level like about AI adoption levels.
And for the question of like what people use this for,
there are now more studies also, like, using chat conversations to see, oh, what are actually, like, the user intents and goals.
And we have a group also within Microsoft who has done something similar.
And they found that information retrieval but also communication have been among the top user intents. There are definitely a lot of writing-related tasks that are conducted with chat tools.
And I think that's like the big picture we see.
But maybe even there, I think it also depends a lot on which AI tool people are using.
So maybe Anthropic's work sometimes shows a heavier weight on, like, coding and developer use cases.
So there's definitely like some variety.
And Jake, I know you've done a lot of studying in the education context as well.
Can you share a little about that?
Yeah.
I mean, the report, I think, gives really definitive numbers in this regard in that recent
surveys show that like 80% of students, sorry, 80% of teachers and 90% of students report having
used, you know, generative AI for school work, you know, with use growing year over year,
right? What's interesting is that, you know, there are like myriad educational, like,
tools and specific versions of generative AI products and all these startups. And yet almost
all of the reporting shows that people are using the generic off-the-shelf,
Copilot, ChatGPT, Claude, Gemini, and so on.
Not necessarily even in like a learn mode, right?
And so I think this speaks to like the bigger sort of policy and training gap
that's out there in terms of the fact that everyone is using these tools,
but there's not amazing guidance for how to use them constructively.
The good news there, I think, is that we've seen like big efforts this year,
So with the American Federation of Teachers, in partnership with Microsoft and OpenAI and Anthropic,
there's actually a big program to try to reskill teachers and give them the training to use this technology appropriately.
So I think there's a lot of hope there, but I think it's also really something we should keep our eye on in terms of making sure that we're using these tools in the right way.
Yeah, and one of the challenges is that the tools are changing so fast.
Like it's very hard to provide any guidance when it's going to be different tomorrow.
Yeah. I find that too. Like, people are always asking me. They're like, oh, what surprises you most about how people are using AI? And it's funny because almost as soon as something surprises me, like a week later, everybody's like, oh, that's obvious, because things are changing so fast. So I'm going to turn that question on to all three of you. And I would like you each to answer this. I'm curious what you have found particularly surprising about how people in organizations are leveraging AI right now. Maybe Jenna, you want to kick us off?
Sure. Yeah, I do a lot of studies looking at how organizational behavior is changing
with AI and something that is somewhat surprising, but I think might really surprise others,
is just how much influence individual people have on the adoption of these technologies. So lots of
studies have shown that how individuals talk about it with their colleagues will change whether
they're willing to use it or what tasks they use it for, and how leadership demonstrates and
discusses these tools will impact whether their people feel like they can use them.
And so while we did just give everyone like, hey, here's access to these absolutely incredible
tools.
As you said, Jaime, we didn't exactly have a guidebook for these people because they're changing
all the time.
And so a lot of the best use cases have just been figured out by people using them and sharing
that sort of from a ground up point of view.
And so I feel like it's been a technology where individuals have had a lot of opportunity
to help shape how it's used and how it's spread through an organization.
Yeah.
You know, I think not only is the bottom up super cool, as you mentioned, Jenna, but also the fact of, like, how much experimentation people are doing and how creative people are getting with these tools, I think, has just been itself really surprising to me.
I think, you know, it's sort of this thing that builds on itself because, you know, there used to be kind of a high barrier to translating an idea.
Like if you had some boring, repetitive thing that you did at work and you wanted to automate it,
right, you probably needed to know how to code and needed to know how to do a bunch of obscure things to, like, make that real and then share it with other people, right? And now that barrier is much lower, and so you see all the creative ideas and the democratization of that happening, and then people sharing it really quickly and easily with their colleagues. And then all of a sudden everyone is like, oh, did you hear what so-and-so did? I'm going to start doing that, right? On the other hand, I think it is a little bit terrifying just how fast the experimentation is going and sometimes how reckless people are, right? Especially with some of the agentic stuff, where people give, like, all permissions to their agents and they let them go do all kinds of crazy things. And sometimes that leads to interesting outcomes and sometimes undesirable outcomes. So I think it's been exciting to see things change so fast, but I hope we can find, like, a good balance of move fast and hopefully not break things.
Yeah, definitely. I agree on, like, the experimentation part there.
I think what, for me, is especially surprising but also fascinating was learning about the new ways of interacting with these tools.
So we talked a lot about like multimodal models.
So like, okay, you can generate text, you can generate videos, but also like the way of interacting with AI.
So throughout the report I learned also about some user research, which was looking at, like, oh, we are so used to using text-based artifacts, but maybe I want to emphasize something, or, like, something speaks to me in particular, and I find it important, so I double-click on this.
And this way the tool then knows, oh, this is something I need to dive deeper into.
So just like these new ways of interacting with them with the tools, I think is something really, really encouraging, because it also speaks to the fact that individuals are really different and everyone has their own needs or preferences.
And some of the tools can help just, like, meet the different preferences there.
So we've been talking a lot about adoption.
And I want to switch now a little bit to talk about the impact that AI is actually having on how people get things done.
And obviously, impact is heavily mediated by adoption.
Is there anything that we can say based on the adoption findings or anything else about what we actually know about the changes that AI is bringing about?
Yeah, I think we're seeing a lot of things.
So while on the one hand, there's still so much we don't know, we are able to observe a lot
as we go.
We do see that a lot of tasks are able to be impacted by AI.
And so when we think about it, we don't necessarily think about whole jobs, like how the
jobs are shifting as a single whole, but more like the tasks different people do are shifting
over time.
So specifically in the software engineering field, we're already seeing that software engineers are spending a lot more time interacting with code in ways that feel fun for them,
like the harder problems, they're getting to think more, they're getting to solve more problems
and do less boilerplate or boring work to them. But then we also see that that's driving some
burnout or some cognitive overload where they feel like I only ever am doing the exciting hard
problems and my brain never gets a break from that. So this shift in how each job is doing tasks
differently is something we're really observing.
And we see it a lot with white-collar workers and jobs that involve information work and being on a computer. They have a lot of tasks that are amenable to this technology.
I love the concern about only ever doing the hard, interesting, exciting problems,
because I totally feel it like it's real.
It's just funny, you know.
Yeah.
Yeah, I can maybe add to some of the adoption or, like, impact side, also, like, on the labor market, or, like, what we see in those sections of the report. I think for the first part, we do have more insights now
into, like, individual productivity effects. There have been, like, multiple studies, field experiments, lab experiments, for different, like, occupations, where some groups are using AI, others do not, and how this impacts then their work. And what we usually see there is that people tend to be faster completing tasks and also oftentimes are able to provide better outcomes. That being said, there are also studies where this is not the case
or which raise issues about overreliance, and that people also need to make sure, like, to still be engaged and making sure, oh, is this actually a task that AI can really help me with, or am I just relying on the AI tool too much there?
So there is some, like, jagged frontier of, like, what AI can do and cannot do,
and, like, how people, yeah, how they interact with that.
On the broader level, on the labor market side,
that's also something that we have emphasized in the report.
We do not see large impacts or effects overall,
based on some labor market studies that are looking at both,
employment rates, but also job postings and these kinds of things.
Maybe if you're looking at specific online labor platforms, where the system or the ecosystem is a little bit different, it might be different, but overall I would say the effects are still modest.
One subgroup where we have early insights now that they might be especially like impacted
is the group of like early career workers where maybe
AI can do some of their tasks more easily than for later stages in their careers.
But even there, I think we still need more time and evidence to say explicitly, oh, this is because of AI and not just, like, macroeconomic trends.
And when do you think we're going to be able to, you know, start seeing that impact?
Do you think it's because the impact isn't happening at that macro level, or do you think it's just kind of a temporal thing?
I think it's probably both.
And I would also say, like, AI is technology,
but we are living in, like, systems,
and we are living or, like, working in organizations,
and organizations will adopt in one way or the other.
And I think we do need some more time,
but also, I think time for people and organizations
to really think about, oh, how do we want this to change our work settings?
That's great.
And actually, I think it's fun for us to dive into what do we want a little bit.
You know, I think often we talk about things as sort of cut and dry or black and white.
And, you know, where is the nuance in what's happening and how can we start, you know,
how can we lean into that to shape a future that we're excited about?
So oftentimes people say, oh, AI is having this impact or this effect.
And I think there was something where all the authors and also editors of the report were like, well, it's not that black and white.
So individual productivity effects might not equal group productivity effects,
because it's just like really different to work on your own than working in the group.
It's also not, oh, the more AI you use, the better. Like, using AI more doesn't necessarily lead to productivity effects. But as Jake already said, and is probably able to speak even more about, it's a lot about how people use AI and in which ways.
When do they use them?
Do they use them before they're thinking about tasks themselves or only after?
So I think these would be two things that come to mind to me.
And we've certainly seen historically that technology, to your point, Rebecca, the way that it gets adopted isn't necessarily the obvious ways, you know, as you sort of bring it into systems. Jake, I know you've done a lot of thinking in that space as well with things like social media.
Yeah. And I think, you know, in some way you could think of this moment as AI's like social media moment, right?
Social media sort of was developed super rapidly. It was adopted super rapidly. It was, you know, optimized for what seemed like the obvious thing of, like, adoption and engagement at the time.
but I think there are these side effects of sort of myopically optimizing for one thing.
And, you know, we're now decades later and, you know, it's hard to disentangle what happened and why, right?
And so I think when we think about AI and we think about the risks and think about things being, you know, is this a cut and dry case?
Is it good, is it bad? And so on and so forth, right?
I think it's important to step back and say, actually, it's up to us in terms of what future we design with it.
And the key to doing that is to not myopically focus on just the easy things, right?
It's easy for us to say, let's get everyone to adopt and let's boost efficiency.
Let's make everything really quick, right?
But I don't think that that's actually the future like we want to live in where everything is just fast, fast, fast.
And so it's really important for us to realize we're in control of this and to put in the ability to measure and monitor the broader effects that these tools are
having so that we can steer things to the right course, right?
So I think it's like a real opportunity to learn from the past and to try to do something
different to steer our future in a good direction.
Yeah.
And are there specific things you're doing in your research right now to try and get ahead of
that or look to that?
Yeah.
I mean, I think the biggest challenge is, look, in a lab experiment or in some very targeted field experiment, actually measuring effects on people is something you can do somewhat well.
It's a hard social science problem all the time.
But now if you step back and you think about,
how do we do that in like the products that we create as,
you know, a big company at scale,
I think that's a really interesting, really hard research challenge.
And, you know, the answer is going to be a combination of technical things and social things and automated telemetry and surveys, and tying all these things together and figuring out how to do this in a way that actually works for an organization making and shipping products, I think, is really important and really challenging.
Yeah.
I wonder if there's things organizational leaders or even individuals should be doing in this
space as well.
Yeah.
Maybe I'll just say one more thing on this.
I think the more that leaders can emphasize that this is an important aspect of product
design, the better off we will all be.
because I think, short of hearing that from leaders, it's, you know, hard for that to happen bottom up, because people have so much pressure to just build things and get them out there.
And so that's one thing that I think could make a real difference.
Yeah.
And some of this, in some ways, is really about building, like, complex AI literacy that isn't just short-term focused or myopic.
And, you know, in some ways, AI literacy shows up as a theme throughout the report.
Jenna, I know that's something that you've done a lot of thinking about as well.
I was wondering if you could talk to how AI literacy relates to some of the themes we've
been talking about and has impact at the individual and organizational level, particularly
as things are changing so fast.
Yeah, I love what Jake was saying about how, like, we need to be asking the right questions
and not just looking at how fast things work and understanding how people actually use it,
because people's own views of these tools impact how they use it. And so we really want people to understand, like all people, at a basic level,
what these tools are, what they're good for, what they might not be as good for, what the pros
and cons are, what the risks are. And we all are seeing this play out in various ways. So we saw
in a study of software engineers this concept called the productivity pressure paradox.
And basically, they said to us, hey, we were given these tools. We were told we're going to be
so productive, but we don't know how they work and we don't know how to be more productive
with them, but our boss is expecting more things. So I'm just going to double down on what I already
know and work even harder. And so there was this lift where when the tool was introduced, they
looked more productive, but it wasn't because they'd actually changed how they worked to take
advantage of it, because they didn't know how to do that. And we also know that how people feel
about these tools, like what they think they'll be good at. I think everyone enjoyed the meme of
asking ChatGPT how many R's were in strawberry.
And those of us who know how they work, it's like, that's not really funny.
Of course, it's terrible at that, right?
But if you don't know that, then you're not going to ask the right questions.
And so we really want people to have sort of a basic understanding of, hey, what are the inherent
biases here that I need to be aware of if I use the model?
Is it going to point me down a certain path because it wants to make me feel great about
myself?
Or should I probe it a little bit more and be like, really, is this a good idea?
Like, how do I use it to make me most effective?
And I think we need to give people a bit of time to learn that.
And I think we definitely see this in organizations where the rollout has been quick
and the excitement has been high, but not everyone has had the time to really learn
and understand, within their own workflow and what they do every day and the way they work,
how these things can affect them and be productive for them.
Maybe actually picking up on one thing that Jenna just said, about how people feel about using AI,
or when they are just asked to use it.
I think this is also like a growing area of like research,
also within Microsoft but also beyond.
And really important is like,
what are the psychological influences of using AI on people, on users?
Also like across different maybe age groups.
What are the risks?
What do we need to care about?
And kind of like where do we need to set guardrails or similar?
Because I think there are these effects as well, and we
need to be researching those similarly as we research, oh, what are the productivity effects of these
things. There was also one interesting finding, I think, from the report about the social
perceptions when people are using AI: that users who use AI are sometimes seen as, I don't know, lazy or
less valuable. At the same time, everyone's like, oh yeah, but I'm also
asked to use it. Or there are also maybe some trust issues.
around, oh, should I make it transparent that I use AI or not?
So I think these areas of research are also growing in importance,
but also in how common they are.
Yeah, I mean, up until now, a lot of the research has been really focused on how individuals use the tool,
but what you're sort of hinting at there, Rebecca,
is like what it means in social contexts and in the larger system to use a tool.
what's some of the early research that's been showing up around sort of AI use in collaborative contexts?
I mean, this is a really exciting space, right?
Like we kind of, the report, the first AI report was a lot more on individuals,
and then we started looking at it in the real world, and in the real world, we work with other people.
And so how these tools interact and mediate collaboration is definitely interesting.
I think one thing we've seen that Rebecca alluded to is that there's a lot of issues with perception.
So one study found that if the same writing material was presented and you said a woman used AI and wrote it, or a man used AI and wrote it, the woman was judged as being less competent, even though the text was the same. So some of the biases people have always held are translating into this new world of AI, and how I receive work that someone else did is being impacted by that. And one positive we see there is that it seems as AI becomes
more ubiquitous and people are like, yeah, it's a tool and it's great. They have less judgment
against others using it. But right now, some people are still, you know, nervous about what do I use,
what do I signal when I'm using it, and how am I going to be perceived? So even just within how
humans relate to each other, we're seeing it starting to have an impact on how they want to use it.
Yeah, it's interesting. You know, I think the metaphor we use for AI is super interesting. And I sort of
hear us playing around with different metaphors. And in some ways, you know, it's really,
important that we think about AI somewhat differently in that previously all of our interactions
with a computer were deterministic and we would like tell the computer exactly what we wanted it
to do, and it was screwing up if it couldn't count the right number of R's in strawberry.
And that's very different now. We have these stochastic models that we can communicate with
in natural language. In many ways they're much more powerful, but they're also not deterministic.
So I think sometimes we think of human metaphors, sometimes we call AI a collaborator,
sometimes, Jenna, as I saw you were just doing, we're thinking of AI as a tool, something we use to get things done.
I'd be kind of interested in like what the different metaphors you play around with in your research
and how you think that shapes the way, either the way that your research evolves and the questions you ask
or the way that people think about that.
Yeah, Jamie, I think it's a great point.
I mean, I think personally, and this is more just individual experience,
but it leaks over into some of the research designs and things we investigate.
We do have tremendous experience in dealing with stochastic and not fully perfect systems: people, right?
And so one thing that I think has been interesting to reflect on being in a research org is like,
we're very used to having, you know, interns or students who have a lot of expertise,
but don't always get everything right.
And, you know, a lot of the time thinking about how to interact with and investigate what
that student has done is very similar to me in thinking about how to interact with
and investigate what an AI tool has done.
And I think it's made for a really comfortable transition to using AI tools in a research
org that, you know, I've seen in other contexts like in artistic or creative settings where, you know, these tools are totally, you know, sort of off limits or, you know, seen as bad or undesirable.
And I think developing this skill of interacting with a system like this is going to be increasingly important.
And I think it is a useful metaphor: how would you describe this to a very skilled but imperfect collaborator?
Yeah, we are actually currently writing up a paper from a study that we did last year,
where we gave two different trainings to two different groups, framing the AI either as a tool to collaborate with,
or a training which was focused on the technical capabilities of the tool.
And we actually did see that the group who interacted with the tool in a more collaborative way,
or thought of the tool more collaboratively,
did have a better experience, and it also led to different outcomes.
So I do think there's a difference in how we experience
and also in which mindset we approach these tools.
And yeah, individually I usually try to see it as a tool,
but I want to interact with the tool and go back and forth,
and not just accept the first output,
but really iterate.
And I think this is also something that studies and research has shown
that this might be helpful for users.
And maybe also adding to your question about individual versus collaborative use,
I think one aspect that we also saw that I really find interesting
is how much more difficult it is to build tools for collaboration
or group settings than for individuals,
because it brings so many new layers to it.
We need to think about social intelligence and what the group environment is, which is not there for an individual use case. When do we want AI to step in in a group setting? How do we think about the memory of the group? What underlying emotional context does the AI need to be aware of? It's just
so much more difficult.
And I think we're also learning a lot about collaboration itself through this process,
because at some point I was like, oh, what does collaboration actually mean?
Does it mean I work with someone or does it mean I work for someone?
So even finding out these nuances, I think is really, really interesting.
Yeah, I think that's a really good point, Rebecca. Like, in some ways,
the collaborative search space is so much larger than the individual productivity search space.
And we already have seen how much scale was necessary for a model just to start to learn some of the emergent underlying pieces of individual interactions with a model.
That's a real challenge and opportunity as we start thinking larger.
You know, Jenna, I was wondering in the software development space, whether you're seeing, especially in collaborative context,
sort of interesting metaphors or ways that people are using AI, because that's,
a place where we see super early adoption and can get good insight for future productivity tasks as
well. Yeah, we did a fun study this past summer where we looked at people who had the same
context. They're in the same team. They work in the same code. They have the same manager, but where
one used it a lot and one didn't. And we interviewed them to understand their kind of perceptions and how
they viewed this. And what we found is that the people who use it more do view it more as a collaborator and
less as a tool. The folks who saw it as a tool then assumed it had a purpose. So like,
you know, the expression, when all you have is a hammer, everything's a nail. So if this is
just a tool, then I got to find the nails and that's the only place I can use it. But if it's a
collaborator, then if it's not working, they would take on a position of maybe it's me. Like,
I should try prompting it differently. I should give it new context. Like, there's got to be
some way to get this thing to work in this context. And so I'm not going to
give up. So we found that the people who viewed it in that way, as a collaborator, believed it could
get to the right answer. And we even see with the model, sometimes you just have to encourage it
and tell it, like, no, you can do this. And then it'll give you the answer. It's really funny.
And so we've seen, yes, with the developers, the ones that just kind of stick with it. And, you know,
as Jake was saying, see it as a collaborator that can do different things. They tend to benefit
from the tool a lot more, and they have a broader idea of what it could potentially do,
and they use it in a lot more contexts. And so then they enjoy using it more. So, you know,
I think it's useful: we want to break out of the deterministic context, and so it's
useful to think of AI as a collaborator. It's certainly aligned with our notion of like AI helps
bring out the best in people. I wonder if this sort of slightly anthropomorphic metaphor limits our
imagination in some ways as well. AI certainly can do things that humans can't. You know,
can operate at scale. All of a sudden, you can have natural language across hundreds or thousands
of people easily synthesized. It operates super fast. You can generate new ideas and different
perspectives very quickly. I've been trying to think of like, what are the next metaphors that
will help us break out of our sort of limitations of thinking about working with people. I don't
know if you all have any thoughts on that space. Not yet. I would be interested if you have
an answer already. I don't. Well, Jamie, I saw your post on how AI is not like a
human, and how considering those differences can be effective or can help us break out of it.
And I found that really exciting because something we're seeing, I think, is a lot of companies and
people are looking to automate something a human already does and do it faster. Like what Jake was saying,
do we just want to be faster at everything?
And that's easy because we can observe what a human does.
We've probably already been measuring what a human does.
Yeah.
So we can do that.
But when we start to think about what can it do that humans can't do,
that's sort of where I think we need that imagination,
where we start to think, okay, this is totally different than anything I've done before.
And I love space, and it makes me think a lot about space exploration.
Like, it's not like we used to go to space slowly when we didn't have electricity and computers,
right?
We just didn't go to space.
Like you looked up there and you thought that would be cool someday.
And then this whole field opened when we got this new technology.
So I do think a lot about what are not just things that I can do better, faster, or in parallel,
but what could I have never done before that I can now?
And I think that's where all the open and exciting parts come to be.
I just don't know the answer.
Oh, and I love your metaphor, Jenna, because I keep watching Star Trek: The Next Generation.
And actually, talking about these different chapters that the New Future of Work report has had, it's been amazing, because when I watched it during the pandemic, it was perfect: in some ways it's this really small, closed community that travels together.
You know, so it's sort of like exploring, but as a small community. And now, obviously, with AI, there's the computer and data and all the rest.
And I do think that that offers a really positive sort of view of the future.
And, you know, as we begin to close here, I thought it might be fun for us to take a moment to really think about this moment that we're in, how we work, how we see other people working, the research that we're reading and doing.
And think about what the ideal new future of work looks like.
What are we creating and how do you want to contribute to it?
Jenna, maybe you want to kick us off?
Yes, with this easy question.
So yeah, I definitely think-
Solve the future of work.
If we get to see that.
Yeah.
Well, what's great about it is that we can ask the question, right?
It is not predetermined.
The future of work is actively being built by us, by consumers.
I love that.
And so I do like to picture a future of work where humans are flourishing with AI
and where humans still get to do meaningful work.
So one of the work streams we have in the future of work is on meaningful work.
And we know that when people do work that they feel connected to,
societies function better and people are happier.
And so I don't want a future where we replace work with agents.
I really want a future where AI allows humans to thrive more,
to still be front and center and to be doing things that change the world.
So I'd be very excited for AI doctors working alongside humans to maybe cure cancer.
You know, that would be excellent.
That was my first crack.
I didn't succeed when I tried.
So maybe now we can.
But that's kind of the future where it's both economically valuable, but it's also meaningful for humans in the world.
And that's the future that I'm hoping that we're painting with our reports and with our research.
Thanks.
Jake?
Yeah.
Yeah, Jenna.
I think like a huge plus one to the human flourishing aspect.
And I think sort of in a way that this is like the broadest and best interpretation of Microsoft's mission statement to empower everyone to achieve more, right?
I don't think it means, like, write more documents and check off more tasks.
I don't think that's the version we should be going for.
I think it means do more of the stuff you're passionate about and less of the stuff that you're not,
so that, like, the future of work is that it doesn't feel like real work.
It doesn't feel like the slog, and you get to do the stuff that you're, like,
flowing and enjoying and time flies by because you're just loving what you're doing.
And I think that's the future we want.
I don't think it's going to happen by accident if we just, you know, work on the more faster sort of thing.
And so, you know, I really hope that the work and research that we all do can contribute to that version of the future.
Because I think we'd all be much happier in it.
Yeah, I think the two of you have already said this really beautifully, and I'd say just a big plus one to that.
I would also love the new future to be a future where AI makes the human parts of work more visible but also more valued,
and a future where humans are able to bring in their creativity, or explore new ways of being creative,
bring in their human judgment, guide directions, and set intentions. I think this would be really great.
And yes, the two of you have already said it: seeing humans flourishing and feeling that their work is meaningful, I think that's just great.
Great. And then, finally, to wrap things up, I've got a couple of lightning
questions, quick questions, quick answers, but they're actually quite hard questions. So just
share what's top of mind for you. Don't worry about it. I'll ask them, and then, Rebecca, we'll
start with you, then Jenna, then Jake.
So, Jake, you've got it easiest.
We're giving you a few extra seconds to think about things.
What they said.
But, yes, just what's top of mind for you.
What's one misconception about AI at work that you wish you could retire today?
The more you use AI, the more productive you are.
I think that's similar to mine, which is that if you give someone these tools, they'll all be 10x more productive because the tool itself is good.
There's so many other factors, how they perceive it, how others perceive it, how it fits into their
workflow. It's not just giving people an amazing tool that's going to change productivity.
And mine is just to pick up, I think, what both Rebecca and Jenna have already said earlier,
which is like it's not all good and it's not all bad. And how we design and use it really matters.
That's up to us and we can steer it to be better or worse.
Great. That was question number one. Now we're on question number two. What's one finding from the report that you hope becomes widely understood?
I think we keep benchmarking against the past. So, what can AI do, or can it do what we already do? And I think this is a mistake, or maybe only the first step, and the more important step comes next. It's, what can AI do or help us with that we can't do yet?
For me, as the editor, I have snuck the same slide into the report for the last three years,
and that is Erik Brynjolfsson's diagram of the space of innovation.
And the idea there is just that the opportunities for augmenting humans are far greater than for replacing or automating them,
and that there's more opportunity, more tasks, more economic opportunity in that bigger space.
I love that and totally agree.
And I'll just point to one of my favorite slides in the deck, which is on,
on the future of computer science education.
And I think, you know, there's this thought of like, oh, you know,
the dawn of AI is the end of computer science education or people needing to know computer science.
And this, I think this slide that we have in there does a great job of talking about how it's actually just a redefinition of what we mean by computer science.
And pulling things to a higher level of abstraction, thinking about computational thinking, problem solving,
thinking clearly and breaking things down, you know, algorithmically.
And I think that's a great shift.
and I'm excited to embrace it.
Awesome.
Third and final question,
and Jake, you're already partway there.
What is one thing you are genuinely excited to research next?
Yeah, so I can tie it into something that I've personally been working on that computer science angle.
And I think giving teachers the ability to control and have visibility into what their students are doing is something we have not broadly done and made accessible to people.
It's something I developed and tested for my own teaching this year
and have also worked with a bunch of academic collaborators
on randomized controlled trials.
And I think just the sooner we can get that into every teacher's hands
so that they are not just subject to whatever their students are doing
with whatever tools, the better we can correct what's going on.
So I am very excited to work on that going forward.
Yeah, I would say we, as a community,
both in companies and in academia,
have spent a lot of time now
on what AI can automate.
But I would be excited
and would love to learn more about what people want
AI to help them with,
going back to the question of
what the new future of work,
the ideal new future of work, looks like
for human workers and individuals,
and learning more about these impacts
and guiding things in that direction.
Oh, for me, I think in the software world, we are seeing that since people can do so much more and they don't have to do the boring tasks, their brains are just never getting a break, and people are feeling sort of burnt out. And I'm very curious about how we can take advantage of AI and do more without running ourselves into the ground, because we're not AI, right? We're people and we have requirements and needs. So I'm really excited to see how we can take advantage of what is uniquely AI and what is uniquely human
to help people flourish, like we talked about.
Thanks, Jenna, Jake, Rebecca.
I appreciate all your time today.
And to our audience, thank you as well.
If you want to learn more about the report and how AI is changing how people work,
visit aka.ms/nfw.
And that's it for now.
Until next time.
