Behind The Tech with Kevin Scott - Ask Me Anything with Microsoft CTO, Kevin Scott
Episode Date: February 18, 2025. In this AMA episode of "Behind the Tech," Kevin Scott and Christina Warren address a variety of listener questions, ranging from the impact of AI on learning and personal projects to the future of software development and AI regulation. Kevin shares his experience using AI for personal projects, such as making Japanese tea bowls, and discusses how AI has changed the way he approaches both work and hobbies. The conversation also touches on the potential for AI to reshape software development, with Kevin emphasizing the significant changes AI will bring to the field and the importance of adapting to these changes. The episode also explores broader topics, such as the regulation of AI, the challenges of scaling AI in regions with limited technological infrastructure, and the role of creative leaders in the era of AI. Kevin highlights the need for consistent and agile regulation to ensure the safe and beneficial deployment of AI technologies. He also discusses the democratization of AI tools and the importance of connectivity in enabling access to these technologies. The episode concludes with a discussion on the evolving definition of a technologist and the blurring lines between technology and creativity, emphasizing the importance of human involvement in AI-driven art and innovation.
Transcript
Welcome to Behind the Tech. I'm your co-host, Christina Warren, Senior Developer Advocate at GitHub.
And I'm Kevin Scott.
It is time now for our AMA episode. And so, for the past couple of months, listeners have been sending in some really fantastic questions.
And we cannot answer every single one that we got, but we are so appreciative of all of you who sent in your questions.
This is going to be a super interesting conversation.
Here's our question from Ravinder.
How has your pace of learning changed in the era of AI?
What's been the coolest thing you've done with AI personally?
Yeah, I've definitely been using AI a ton for
the projects that I'm doing outside of work even.
So there are a bunch of things that it gets used
for at work that are hugely useful,
but the outside-of-work ones, I think, are fun.
So maybe the coolest thing that I've done is I have gotten really into
making Japanese tea bowls in my ceramic studio this past year,
and I have been researching how to replicate some of the results in traditional, classic Japanese Raku tea bowl making, which has
involved me making my own kiln, devising my own
glaze recipe, and even devising a way to take a
clay body that you make the bowls out of and make
it tougher so that you can handle all the
thermal cycling in this crazy firing process.
And I will tell you that Copilot was amazingly useful in all of that, particularly with the
kiln design and with helping get some ideas and make progress on the glaze chemistry for
this glaze.
That's so interesting.
So what do you do with Copilot with that?
Do you just have a conversation and just ask questions
kind of back and forth about maybe how you want to design stuff?
Yeah, I mean, basically, for the glaze design,
I told, or I asked, Copilot, I was like,
I've got a set of tea bowls that I am firing
in the classic Raku style at 1,100 degrees Celsius,
where I'm going to take the glazed vessel,
put it directly in the at-temperature kiln,
leave it for three minutes until the glaze goes cherry red,
and then pull it out to air quench.
I gave it a few hints about what I had been thinking about.
What I know of the Japanese approach,
and this is the interesting bit:
the traditional Japanese Raku glazes use
lead in them to get
the elements of the glaze to melt at a lower temperature. Obviously, I don't want to be using lead in my tea bowls.
There are safer variants of lead that you can use,
that are considered safe in ceramics,
but I didn't want to, and so you need to use something else, like boron.
Figuring out how much boron to use, and in what form
the boron comes into the glaze, is a little bit tricky.
It was super helpful. It felt like
a real conversation that I was having with someone who
knew something a little bit different
about glaze chemistry than I do.
That's genuinely so fascinating.
Also, thank you for sharing all of your many interests with us,
because you're such an interesting person.
I never would have thought,
I know how much of a maker you are,
but building your own kiln, making
Japanese tea bowls, and using AI to get more information about all of this.
I love it. That's a great use case of AI. I love that. Great stuff. Thank you again for that question.
All right. This question is now from Rafael and he asks, do you believe that in the future,
AI will completely reshape the way that we produce software? And he goes on to say,
I mean, could we eventually get rid of the development tools that we use today and
rethink the entire process from scratch,
creating a completely new approach to software development?
Seems likely.
Yeah.
I'm an old enough fart that,
as disturbing as it sounds to say,
I've been programming for 40 years.
I'm 52, I started when I was 12.
And in those 40 years,
software development, even without AI,
has come to not really resemble much at all what
software development looked like in the 1980s.
I think it's a safe bet that
software development is going to reform itself over
the next handful of years and I'm, I think,
just super clear that AI is going to
change the way that we write software.
I think, yeah, there are just sort of all of
the obvious ways that it's going to change things.
Coding is a complicated activity and it always has been
like this thing where you've got an idea in your head
that needs to be sharpened and then you need to get
the sharpened idea out into
a form that the computer can go execute.
The thing that's really changed,
and I've said this before in public, I think,
is the way that we've been building
software hasn't really changed since Ada Lovelace,
like this whole process of algorithmic thinking and
understanding the complexity of a machine,
like all the way down to like its atomic details,
and then using that understanding of the machine to
transform this idea that you
framed algorithmically into a program that the computer can execute.
We've been doing that for almost two centuries now,
and there really hasn't been much of an alternative.
Our tools have become increasingly more powerful,
but it's basically that.
You want a computing device to do something for you,
you either figure out how to do that process yourself,
or you have to hope that someone who understands how to do that,
has written a program that you can run yourself.
I think the big thing that's changed with AI is now you
have a thing where you can
describe a thing that you want accomplished,
not necessarily even in algorithmic terms,
and then AI can do some or all of that mapping
to get the computer to actually do the thing for you.
That really, really dramatically changes
how we think about software development and who's a developer.
It changes what it means that we're building.
For instance, I was just having
this conversation with a bunch of engineers,
I don't know that you need apps in this world.
An application is a by-product
of this earlier thing that I just described:
someone has to understand
a set of problems that a group of people want to solve,
and then they put a bunch of code together into
this thing called an application that does those things in
a general enough way that those people
can get some value out of it and be able to use it.
I don't know that you are going to
need that too much further in the future.
You'll still need the capabilities that are in the applications,
but the user interface, like telling someone they've got to go learn all the complexity of
some software, because they've got to navigate some weird user interface and
information architecture to get a thing done, versus just saying what they want done.
That's clearly changing.
And that has implications for software development as well.
Yeah, it does.
I mean, that's what I think about, right?
Because obviously, I think you're right.
It could change completely how we define a developer,
which is something that we've been trying to do in various ways for a long time.
But now, we finally feel like we're maybe on the cusp of really broadening that concept.
It really feels like that could be a reality.
But it does make me think about it on other levels:
how do you design programming languages, or do you,
or how does that change?
What matters then about the underlying code beyond that?
If we are able to just create things based on our natural language
and based on what we want and make updates
iteratively with multiple people at once.
How does that change how we design those underlying systems?
I think that's really interesting to think about too.
Yeah, 100 percent and super useful stuff to think about.
The trick is, and this has been true about
software development forever, that you want things to compose.
AI is still a pretty long way away from delivering
this grand vision that I just articulated.
What we really need to be thinking about between
now and whenever that happens,
if it actually happens the way that I imagine,
is how do you take tools that are on
some spectrum
of classical software development tools to
this new AI future and make sure that all of
the things compose together in reasonable ways so that
developers can then take all of
this stuff that's in their toolkit and get
the thing built that they're hoping to be able to build.
Yeah, totally. I mean, lots of things to think about,
and I totally agree with you on that.
We've got this question from Veronica,
shifting topics just a little bit,
still around AI, and she wants to know,
how do you suggest we regulate AI?
Should this be done at the federal or state level,
and how can we ensure that AI is safe and secure,
both from a public and private standpoint? Great question.
Yeah, I think it's a super great question.
Again, I've said this as well,
and I think I talked about it even some in my book.
Of course, any technology as powerful as AI needs to be regulated,
and it would be just an odd thing in
the course of human history if you had
something this powerful and it wasn't regulated.
The thing that you want to do though with regulation is,
I think, consistency is helpful.
That's where federal regulation that is consistent across all the states,
and even international standards, would be super, super useful.
Because good regulation's intent is to get
beneficial technologies deployed to those who will benefit from them
as quickly and safely as humanly possible.
So you don't want unnecessary complexity in
the regulation itself, because that prevents
the beneficial technologies
from getting to whom they benefit.
But yeah, I think in general,
we will need our regulators to be pretty agile in making
regulation that can encourage
the most beneficial things for the broadest number of
people to get to the market as quickly as possible,
while at the same time
being careful about what the downside risks are to a bunch of things.
In a bunch of places,
the biggest downside risk honestly is failure to deploy quickly enough.
There are, for instance,
a whole bunch of medical things right now where the models are strongly superhuman.
I've had some experience with
my own mother in the past year with the healthcare system,
where if she had had access to the most advanced AI tools,
a whole lot of suffering could have been reduced.
Yeah, lots and lots of people
are in similar situations, where it's not some theoretical
future where stuff could be beneficial.
It's now that it could be beneficial.
How do you think we go about, I guess,
educating or ensuring that our legislators are aware of what the potential,
I guess, both opportunities and risks are in this area, right? Because this is something I think
about a lot. I agree with you, regulation is super important and it needs to be consistent.
But I do sometimes wonder, I mean, it's hard enough for us as technologists to keep up with
all these things. How can we do a good job of making sure
that the legislators are informed?
Yeah, I will say, the thing that I'm most encouraged by on
this front with AI is that, more so
than with any previous technology that I'm aware of,
you have practitioners in the field spending a whole bunch of
time talking with people in the academy and people in government,
trying to make sure that they have
the information that they need in order to make good decisions.
I see people doing it in very respectful ways.
Now, obviously, everybody who's coming at it,
whether you're in the government or you're in the academy,
or you're in the industry,
you're obviously biased in some way.
So we all need to be as clear as we possibly
can about our biases and lay them on the table.
But just because you're biased
doesn't mean that you can't get information out there.
Then have someone adjust for the biases,
look for what the through line is and everything,
and then make good policy decisions.
That's a way better way to be than to
not be
transparent about what's going on or
decide that you're not going to talk to
somebody because it's not your job.
I think right now in tech,
anybody who's working on AI,
like part of your job is to, when required,
patiently explain what it is you're doing,
why you're doing it, and how it works.
Great stuff. All right. So this is a question from Mui Gary,
and this is really good.
How can large language models be scaled effectively
across regions with limited technological infrastructure?
So think about places like African nations.
Like, what are some of the biggest hurdles
for AI-powered educational solutions to move, you know,
beyond prototyping and into full scale
deployments in underserved regions and how can these challenges be overcome?
Well, I think the news there is probably pretty good.
So if what you want to do is to build an AI application, it has never been easier than it is right now to go build one.
You have more choices about very powerful models to access.
You have models that are available behind APIs that are hosted
where you sign up for a developer key
and just start making requests.
You have a huge catalog of open source models that are
on a spectrum from general purpose to
things designed for very specific tasks.
You just have a lot of choice where you don't have to start by saying,
I've got to train a model from scratch.
Right.
And so I think that is a huge advantage.
Like it's definitely not the way things were 20 years ago when I wrote my first machine
learning programs.
You know, it isn't even how things were three or four years ago.
Right.
I was going to say it's a lot different even then, right?
It's much easier for people to build really good things now versus three or four years
ago to your point.
Yeah.
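As an illustration of what "sign up for a developer key and just start making requests" can look like, here is a minimal sketch. It assumes an OpenAI-style chat-completions API; the URL, model name, and environment variable are hypothetical placeholders, not any specific vendor's product.

```python
# A minimal sketch of "sign up for a key and start making requests."
# Assumes an OpenAI-style chat-completions API; the URL, model name,
# and environment variable are placeholders, not a specific product.
import os

import requests

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical host
API_KEY = os.environ["MODEL_API_KEY"]  # your developer key

def ask(prompt: str) -> str:
    """Send one prompt to a hosted model and return the reply text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "some-hosted-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Suggest a lead-free flux for a 1,100 C Raku-style glaze."))
```

The point of the sketch is that the entire barrier to entry is a key and an HTTP request; no model training is involved.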
I mean, my boss Satya Nadella tells stories about his visits to India recently,
where he has seen just the rapid diffusion of
AI applications at a pace that he's never seen before.
The thing that he says,
which I think is really good is,
there are parts of rural India where
the industrial revolution still hasn't shown up after 250 years,
where they already are seeing the diffusion of AI,
where a farmer, through their mobile device, can access
a powerful AI system that will help them understand
which government programs they are entitled to,
and then go sign them up, so that they get
these benefits that their government intended them to have.
That's just a shocking rate of diffusion.
But it's also not all good news.
I think while the expertise required to build
an AI application is democratizing super fast and you've
got, like, high levels of accessibility to the APIs and,
you know, basic infrastructure required to go build them,
you still have to be connected.
You still have to, like, have some baseline level of
technology fluency in order to be able to use the systems.
The reality is there are large parts of the world that are
not yet sufficiently connected and where
that technology fluency isn't as good as it should be.
There's a bunch of,
at this point, deeply unsexy work that we still need to
prioritize and make sure that we're focusing on things
like just rural broadband.
You know, like I've definitely told this story
before, but yeah, my mom and brother have good
internet in this rural town that they live in in Central Virginia
because they're lucky enough to live within 100 yards of
the local Telco exchange.
My uncle who lives just a few miles away from them is
still on some crazy 300K DSL connection.
His Internet is barely usable.
He has to come to my mom's house to do things on the internet.
So nuts. That's the thing
that I think we really have to pay attention to, because
as the things that you can do and
the capabilities you can access with
that connectivity become more powerful,
the absence of connectivity becomes a bigger and bigger disadvantage.
No, I mean, I think you're exactly right.
And this is a conversation I feel like, you know, we definitely talked about this on this podcast,
but I feel like we collectively as an industry and society have been talking about this for at least 20 years and it's
only becoming more and more important to start to really invest in overcoming these infrastructure
challenges just because connectivity is only going to be more important.
I think that's a great distinction that it's easier than ever to build applications and
things with these tools, but actually getting them to people and making it so that people can interact with them
is maybe the less fun part, but
arguably even more important, because without that, you know, all of this is moot.
Yeah. Yep. All right. Question from Peter.
He asks, I am curious about how Microsoft approaches
running technical tests against its own infrastructure,
LinkedIn, Xbox, Office 365 and others.
Given the scale and complexity of these systems,
how many lessons have you learned over the years
while managing that infrastructure?
And he goes on to ask, and for those of us in DevOps,
what is the most surprising lesson
that you've encountered that might catch us off guard?
Oh, God. That's a super good question.
Very complicated, so I don't know whether I'm going to be able
to answer the whole thing.
I was going to say, if you want to take this in parts, do that.
That's okay.
Yeah. Look, so I had a boss who was maybe the best
DevOps leader I've ever worked for or with in my career.
He had a bunch of very simple things that he would
say about philosophically how you should approach DevOps.
One of the things he said is,
you can't fix something or improve it if you're not measuring it.
So a lot of the answer to the question,
it just boils down to like,
are your metrics good?
Are you measuring everything that's happening in your system?
Do you have good monitoring built on top of the metrics?
Do you have good visibility into
the internal state of all of the systems?
That's one thing that's super important.
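As a toy illustration of "you can't fix or improve what you're not measuring," here is a minimal sketch of instrumenting a function with latency and call-count metrics. The metric names and the in-process store are hypothetical; a real system would export these to a monitoring pipeline rather than a dictionary.

```python
# A minimal sketch of "measure it before you fix it": wrap an operation
# with a timer and a counter so there is something to monitor. The
# in-memory store below is a stand-in for a real metrics pipeline.
import time
from collections import defaultdict

METRICS = defaultdict(list)  # metric name -> list of recorded samples

def timed(name):
    """Decorator that records latency and call counts for a function."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                METRICS[f"{name}.latency_ms"].append(
                    (time.perf_counter() - start) * 1000
                )
                METRICS[f"{name}.calls"].append(1)
        return inner
    return wrap

@timed("render_page")
def render_page():
    time.sleep(0.01)  # stand-in for real work

for _ in range(5):
    render_page()
print(sum(METRICS["render_page.calls"]), "calls,",
      f"{max(METRICS['render_page.latency_ms']):.1f} ms max")
```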
Another thing is complexity needs to have a reason.
A lot of times, complexity just emerges because
the most convenient thing to do to
systems architecturally is often to just
pin new stuff onto old rather than to do the harder work of,
okay, we've got some evolved requirements here.
Things are different from when we originally designed this system.
Now we need to just push pause and go refactor the whole system and make sure
that it's designed in
the simplest possible way to meet
the new set of requirements that we now understand.
One of the things that I've always tried to do in
the organizations that I've led is to make sure that you are
reserving some amount of
your engineering capacity to go deal with tech debt.
That you've got teams who are building
shared infrastructure whose job it is not
just to provide a set of services to everyone,
but to be building things in
a really architecturally simple way,
and to make sure that things are robust, maintainable,
scalable, secure, fault-tolerant,
all of the things that you want out of your systems.
You just got to rebuild stuff every now and again.
As painful as it may sound,
when you've got product managers screaming at you that you need to go ship
this new feature, or you're eyeballing
short-term revenue or something like that,
you have to go tell all of your stakeholders,
we've got to push pause on this for a little while
while we re-architect this thing.
You just have to do it, because complexity really is the killer.
There's a bunch of stuff that we're doing with AI right now, though, to deal with some of the complexity.
Sometimes when you have complexity in systems, it's irreducible.
You just can't figure out how to design away from it.
AI can help manage some of that complexity.
It's not in a way where you're letting this AI be
an abstraction layer that sits between you and
your understanding of your system, but rather having it help you
very quickly triage things, or figure out
how to root-cause operational issues, or whatnot.
It can be super helpful with stuff like that.
Yeah, I mean, I can go on all day about this particular bag of issues.
But yeah, I mean, you just got to test, test, test.
Here's a thing, maybe this is it:
I have gone into situations before where people have
built systems or functionality that are designed to
do a thing in rare circumstances,
like data-center-level fault tolerance, for instance.
What happens if this whole data center goes down,
if it loses power or if there's a fiber cut or something,
where the team tests the functionality once,
and then assumes that it's going to be available
forever and ever just because it worked one time.
Right.
So yeah, you got to test for
infrequently occurring things and make sure that
when the infrequently occurring thing happens,
that you are ready to go,
which basically means you need to simulate
the infrequent thing more frequently
than it will naturally happen.
And that's like a counterintuitive thing, I think, for some folks.
Yeah, no, that is.
But I like that.
I think that probably answers this question really well, because that does seem counterintuitive,
but it makes sense, right?
Like, you need to make sure that when this actually occurs, that it's going to work.
But to do that, you've got to have, it's kind of like fire drills, right?
Like, you know, you do them, hopefully,
much more frequently than the real thing actually occurs,
just in case, so you're ready.
Yeah. I was just going to say,
at LinkedIn, we used to,
at random points every week,
just take a whole data center offline to
make sure that all the fault tolerance systems would work.
Okay, that's awesome. That's wild.
And was that a process that started before you joined or was that something
that you asked them to do?
I'm just curious.
That was a thing I asked them to do.
Amazing.
Amazing.
And was it for that reason, just because you wanted to ensure that resiliency?
Correct. Because resiliency is a super hard thing to achieve,
so it is not a service that you can just sign up for and get resilience.
It basically means that every single thing that's
running in the data center has to be resilient.
It has to be prepared for things to
fail in the worst possible way,
which means obvious things for things like databases,
and networks, and storage systems, and whatnot.
There's a bunch of super classic computer science and
engineering stuff that you can go
do to make those things fault tolerant.
But you also have to make your applications fault tolerant.
What happens if an application server
that's rendering the user experience to a user,
what happens if it loses all network connectivity?
What happens then?
Is there some routing layer somewhere, maybe in the end-user application, that will notice that its connection back to its application server is no longer responsive and route the request sideways to another server somewhere in the service catalog,
in another data center.
You just have to think through all of this stuff:
how is every single piece of this system going to behave?
And you have to have every single service owner
accountable for having done that work.
A real good way to make sure that they've done the work is, without telling them, you just kill the whole system, and you'll know real quick whether their application's robust or not.
I love it. I love it. I'm glad you implemented that. I mean, I think it is a testament to LinkedIn. I've covered many of these services and, you know, worked at companies that need to be online a lot that have not always had great uptime. LinkedIn is one of the ones that has, at least in my experience, very, very good uptime and those sorts of things, and I think that's probably a testament to the drills now, but not always.
Yeah, well, but that's how you get there, right? I guess it's by the habit of just assuming, at any point, it could be gone.
How are you gonna recover?
I love that, I love that.
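As a sketch of the kind of drill Kevin describes, here is a toy failure-injection loop: deliberately take an instance down, then verify the system still answers. The service names, the svcctl stop/start command, and the health endpoint are hypothetical placeholders, not LinkedIn's or anyone's actual tooling.

```python
# A toy failure drill: kill one instance on purpose, then verify the
# system still responds. Service names, the "svcctl" CLI, and the
# health URL are hypothetical placeholders, not a real deployment.
import random
import subprocess

import requests

INSTANCES = ["app-server-1", "app-server-2", "app-server-3"]  # placeholders
HEALTH_URL = "https://service.example.com/health"  # placeholder endpoint

def run_drill() -> bool:
    """Take one instance down at random, then check overall health."""
    victim = random.choice(INSTANCES)
    # Hypothetical CLI; substitute your orchestrator's stop command.
    subprocess.run(["svcctl", "stop", victim], check=True)
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
        healthy = resp.status_code == 200
    except requests.RequestException:
        healthy = False
    subprocess.run(["svcctl", "start", victim], check=True)  # restore it
    return healthy

if __name__ == "__main__":
    print("survived drill" if run_drill() else "failover FAILED")
```

The design point is the one from the conversation: run the drill on a schedule, far more often than real outages occur, so failover paths are exercised before they are needed.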
All right, this question is from Samantha and she asks,
I've noticed you've had a few recent guests
that aren't typical technologists
like Ben Laude and Refik Anadol.
Could you share more about your thinking and perspective
on how more creative leaders
are working in the era of tech and AI?
Yeah, part of it is,
just to be perfectly honest,
these are people that I want to talk to,
and I think the conversations are
interesting and I want to share them.
But I think there is
this thing that we have been talking about,
which in the era of AI,
this distinction between who's a technologist and who
isn't is blurring in a really profound way.
I think it's good to be talking to
a broader variety of people, because you have, like, Refik,
for instance, who is a trained artist,
but he's using technology in incredibly sophisticated ways
to realize this artistic vision that he has.
I think there's just going to be more and more and more of that over time
because this previously daunting and inaccessible technology
is becoming less daunting and more accessible,
which means that more people are going to be using
it to do a broader swath of things.
So is Refik an artist or a technologist?
Maybe it doesn't matter.
He's just doing amazing stuff.
Then there's this conversation I had with Ben Laude.
I think all the time about what the nature of art is,
and what's the difference between art and instrument,
and what's the boundary between performer and instrument.
So I think it's interesting to have
artists come in and talk about how they're thinking
about those relationships, you know, that they have had
in their art and in their craft for a very long while.
And then, you know, how that thinking is changing in an era of AI.
I just feel like they're
super important conversations to have right now.
No, I think you're right. I think we're maybe breaking down this demarcation in places.
And I don't know if it matters; yes,
that could be the answer to both questions.
Because these lines, I mean, when technology truly becomes accessible
and kind of something that we all sort of kind of imbibe,
it becomes just a part of us, right?
And I think that the oftentimes artificial barriers
that we put into place disappear,
and it's just like, you're a creator, you're a person,
regardless of how you get there and what you do.
It doesn't have to be, oh, I have to be in this box or this box.
It's like, no, I'm just creating.
The thing that I will also say is I have super strong opinions about some things.
For instance, I'm not interested in AI at all,
absent a human wielding
the AI to do something interesting.
Now, I'm not claiming that everybody needs to be my way,
but it's just interesting to me that this isn't
a point of view that I came to through some huge process of deliberation.
It's just like I am not interested in the idea of some autonomous AI,
like spitting out art or music or whatnot,
absent the hand of a human creator, because I've discovered
part of my connection to the experience
of art in the first place is,
I like to know, oh,
this is the human and this is how they made it,
and imagining what they must have been thinking,
and are we alike, are we different?
You like the story. Yeah, I like the story.
And the story of like, yeah, you know, the robot made this like,
who cares?
Right? No, and I think that's a great point, right? And that's
a really interesting perspective, because obviously, I mean, I
think there's an argument to be made that there is artistic merit
in something that is completely, you know, autonomously
generated, and that's an interesting thing to debate.
But I tend to agree with you. The stuff that I'm interested in consuming the most,
outside of kind of an abstract level, is definitely the stuff that has been guided by a human.
But if the technology, if the AI, can make things more unique or effective, or just add a different nuance to something,
that can lead to a great outcome, or an interesting one anyway.
It's an interesting debate, because I don't know whether I'm right
or you're right. I do actually have this argument with people who will say,
hey, you're crazy,
you could have something that's interesting
and artistic and merit-worthy that doesn't have that human hand,
and it's like, okay, great.
The argument is interesting, right?
It tells us something about what is the nature of these things.
No, I think it does. Yeah, because I can see both perspectives.
I tend to, I think, align more with you,
but I can understand the philosophical argument about it.
But I think that for a lot of us, still, what ultimately binds us to things is not just
the output itself, but everything that comes before it, which is the story, and
the thinking about what went into it, and, frankly, in some cases, the imperfections,
right? And that is something that,
not to say that it couldn't be there, because who knows where AIs might be in decades,
but that doesn't seem to be the direction
that a lot of those things are taking now.
And so instead though,
I think it's interesting to think about
how these tools can be used,
not to just clean up imperfections,
but to maybe continue to let those things be there,
but maybe show off other ideas, I don't know.
All right, this question is from Kathleen,
and she says, I've been hearing a lot about agents
being the next AI frontier.
What can you tell us about what that will look like,
and when can we expect to use AI in that capacity?
So great question.
We all want to know when are
the AI agents going to be able to run our lives, Kevin?
I don't know for sure.
I think it's important to be more specific about what it is we think agents are.
So in a way,
co-pilots are agents,
but they're agents that can help
you with, relatively speaking, small tasks.
There might be a lot of them that you're
doing and they may be very important,
but right now, the things that we can delegate to AI are relatively small,
like small software development tasks,
like small productivity tasks.
Eventually, if you are excited about this notion of agents,
what you want to be able to do is think about an agent as
a real, fully capable peer or collaborator or coworker.
You want it to be able to collaborate with you in
very broad and very capable ways or you want to be able to
delegate big things, not just five-minute tasks,
but five-day tasks.
Go completely autonomously,
build this whole application for me,
and come back with a PR you want me to review,
and something that I can test,
which you might do to
one of your fellow software developers, right?
Right.
So look, I think we're definitely moving in
the right trajectory to have these agents,
which in our parlance we call co-pilots,
become more and more powerful and capable over time.
I think we're feeling really good about reasoning capability.
We are beginning to make progress on actions and tool use.
We've seen a little bit of that in the past year,
and I think you're going to see a bunch of it in the coming year.
We are seeing really interesting things happening,
I think, and have a lot of things that we can
expect to see in the next year on memory.
A lot of what happens now with these agents is they're very transactional. So they have enough information to do
a very specific task in a very specific context.
But in order to have them be more generally powerful,
they have to really have complete memories that persist over time.
Then we've got a whole bunch of plumbing work to go do.
In order for the agents to be able to do things,
even just beyond basic tool use, where they can
take action on your behalf or where they can go use
a tool to assist them in
accomplishing the task that you've set them off to go do,
you really do have to think about what
entitlements look like in this universe.
How do you make sure that
the agent has access to what it needs to have access to in order to complete the task
it's been asked to do,
and how do we, the humans, reason over those entitlements and
get things both available and permissioned correctly?
But look, I'm seeing lots and lots of progress,
and it's hard to predict the date when agents with capability level X is going to be there,
but I think it's safe to assert that we will see
increasingly powerful agents in a variety of
different forms emerge over the next year.
Sounds good. I think that's probably a good hedge. And I do look forward to the
day that the robot overlords truly control my life, but until then, I'll be honest, I'm
kidding. I'm kidding. But I'm glad to know that progress is being made.
Okay. That does it for our AMA episode. Thank you again so much to everyone who sent in these excellent questions.
Really, really good stuff.
Thank you, Kevin, for your answers.
Really, really interesting.
Please make sure to follow Behind the Tech on YouTube or wherever you listen to podcasts.
And if you have anything that you would like to share with us,
you can email us anytime at behindthetech at microsoft.com.
Thank you so much for listening.
See you next time.