Closing Bell - Closing Bell+ Microsoft CEO Satya Nadella 11/15/23
Episode Date: November 15, 2023
Microsoft is introducing its first chip for artificial intelligence, along with an Arm-based chip for general-purpose computing jobs. Both will come to Microsoft's Azure cloud, Microsoft said at its Ignite conference in Seattle. CNBC's Jon Fortt spoke exclusively with Microsoft CEO Satya Nadella about new AI chips, the rise of AI, and more.
Transcript
Satya, thanks for having me back here in Seattle.
You just got off the stage minutes ago.
Thank you so much, Jon, and thanks for coming out here.
It's sort of becoming a great habit for you to now be showing up multiple times a year.
We love it.
It is indeed. Well, to talk to you, of course.
Big announcements here.
A year ago, OpenAI put out ChatGPT,
and your stock is up around 50% since then. What's been
the most significant first wave of adoption in AI for you? You talked a lot about Copilots today.
General public and investors probably don't think about those as much, but strategically for you,
has that been the most significant? Yeah, I would say both,
Jon. I mean, there are two real breakthroughs in some sense with this generation of AI. One is this natural user interface that, you know, the first time people really got a sense for it was
when ChatGPT launched, right, where there is a complete new way to relate to information,
whether it's web information or information inside the enterprise, and that's what we're mainstreaming with our Copilot approach. And that definitely has caught the
imagination. It's becoming the new UI pretty much for everything, or the new agent, not just to get the knowledge but to act on the knowledge. But the other thing that's also happening is a
new reasoning engine. Just like, say, in the past, we thought about databases. We now have a new
reasoning capability, which is
not doing relational algebra, but doing neural algebra. And with that, you know, you can take an API and continue a paragraph, or do summarizations or predictions. That's a new
capability. That's going to change pretty much every software category. So between both of these,
you can see a lot more mainstream deployment of AI and the benefits of it.
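To make that "reasoning engine behind an API" idea concrete, here is a minimal sketch of the kind of completion/summarization call he is describing. It assumes the openai Python SDK (v1+) pointed at an Azure OpenAI deployment; the endpoint, key, and deployment name are placeholders, not anything named in the interview.

```python
from openai import AzureOpenAI

# Placeholders: point these at your own Azure OpenAI resource and deployment.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

def summarize(text: str) -> str:
    """One 'neural algebra' call: hand the model text, get a summary back."""
    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT",  # e.g., a GPT-4 deployment name
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(summarize("Paste any paragraph here and the API returns a short summary."))
```

The same call shape covers continuation and prediction as well; only the prompt changes, which is why one capability can touch so many software categories.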
GitHub Copilot perhaps is a good example of it.
And that's what I think a lot of people outside of the developer community don't necessarily get,
is that there's this AI tool that's helping developers to write code.
Jensen Huang, we'll talk about him a little bit.
He was on stage with you a few minutes ago.
He was talking about that even accelerating the speed at which NVIDIA is able to innovate. What's the breakthrough there?
Yeah, I mean, for me even, my own confidence about this generation of AI being different came when I first started seeing GitHub Copilot with, I think, GPT-3 and 3.5, because that was the first product. In fact, before ChatGPT, we built GitHub Copilot and deployed these models.
And the fact that developers now can just do code completions, you know, and one of the things we've done is bring back the joy of software development, the flow, the ability to stay in it. And it's showing productivity data which is unlike anything we've seen in the past, right?
You're taking one of the most knowledge-intensive tasks there is, which is software development, and seeing 50-plus percent improvement.
And so that's what we're trying to replicate
with Copilot for the broad knowledge work
and information work, I mean, frontline work.
And in fact, what Jensen is saying is they're deploying both GitHub Copilot for their developers and Microsoft Copilot for all the employees at NVIDIA. So he's saying, watch out. If you think NVIDIA is fast now, let's see what happens a year from now.
So there's this scramble happening right now across enterprise software. So many companies
I'm talking to are trying to add AI into their portfolios, enhance their existing product offerings with it,
and then see kind of how much their customers
are going to be willing to pay for that added boost in, perhaps, productivity.
You've been early on this and have some data
showing how Microsoft customers feel about, you know,
this AI being built into your software.
What are you finding so far?
Yeah, it's very, very, very promising. I mean, obviously, the developer one is the one where
we have conclusive, I would say, data. And it went from sort of, oh, well, this is a good idea
to mainstream, just like that, because of the obvious benefits, both individually and for
organizations. I do believe that in firm-level performance, you'll start seeing divergence between those who are adopting and those who are not adopting some of this technology.
The next place I think is even things like customer service.
We ourselves deployed our Copilot for customer service,
in fact, for Azure support.
It turns out when you're a customer service agent,
by the time you're trying to solve a problem,
it's already a very hard problem to solve because the automatic bot didn't solve it. It's been escalated to you. So
you have an irate customer on one end and a tough problem. So Copilot helping you there is fantastic.
And the idea being that the AI can go into the database of the company, figure out when did they
call before, what were their problems.
Correct. Or the knowledge bases, and bring the solution to you, so to speak, versus you going foraging around for it.
But here is the interesting thing.
We had not realized that it's not just that that was hard,
but it is also the pain every customer service agent had
of summarizing everything they did to solve the problem
at the end of the call, which took like half an hour
with all the artifacts, the logs, and what have you,
and all that's automated, right?
So that's real productivity gain.
So we're seeing massive throughput.
Same thing is happening in sales.
Same thing is happening in finance.
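To illustrate the wrap-up step he describes, here is a hedged sketch of auto-generating the end-of-call case summary. The call log, prompt, and deployment name are invented for the example; it assumes the same openai Python SDK against an Azure OpenAI deployment as in the sketch above.

```python
from openai import AzureOpenAI

# Placeholders again: your own Azure OpenAI resource and deployment.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

# Invented call log standing in for the transcript, artifacts, and logs
# an agent would otherwise summarize by hand at the end of the call.
CALL_LOG = """Customer reported VM provisioning failures in East US.
Checked subscription quota; regional vCPU limit was exhausted.
Filed a quota increase and confirmed provisioning succeeded."""

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT",
    messages=[
        {"role": "system",
         "content": "Summarize this support call as three lines: Issue, Steps Taken, Resolution."},
        {"role": "user", "content": CALL_LOG},
    ],
)
print(response.choices[0].message.content)  # the auto-written case summary
```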
So broad strokes, I think at this conference, we are sharing all the data we already have with Copilot.
It's early days.
But we're very, very optimistic that this is probably the thing that we've been looking for.
In fact, the last time information technology showed up
for real in productivity data was when PCs became mainstream
in the late 90s and early 2000s,
because work and workflow changed.
I hope that that's the same thing
that gets replicated in this AI era.
Yeah, it's a generation ago.
How long do you think before the data is conclusive enough
that you'll know on the demand side,
the customer benefit side, kind of what the calculation is, and that it'll be able to aid your sales effort?
Yeah, it's a great question. In fact, one of the things we're also developing
is a bit of a methodology for how you go about measuring, because that's one of the key things, right? What are the productivity measures here? By cohort, can you think about some evals, some tasks: deploy the software, then follow the cohort at one month, at three months, and look at your own data.
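One way to read that cohort methodology as a calculation: track the same task metric for one cohort before rollout and again at the one- and three-month marks. A toy sketch in Python; the numbers and field names are invented for illustration.

```python
from statistics import median

# Hypothetical task-completion times (minutes) for one cohort of users,
# measured before rollout and again one and three months after.
cohort = {
    "baseline":     [42, 51, 38, 47, 55],
    "one_month":    [31, 36, 29, 40, 33],
    "three_months": [24, 28, 22, 30, 26],
}

base = median(cohort["baseline"])
for period in ("one_month", "three_months"):
    gain = (base - median(cohort[period])) / base * 100
    print(f"{period}: {gain:.0f}% faster than baseline")
```

Following the same cohort over time, rather than comparing different groups, matters because, as he goes on to say, every business and workflow is different.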
And that's one of the other things that we're realizing is it's like every business is different.
Every workflow is different.
Every business process is different.
And it's also different in time.
And so that's why even having these customization tools, so we're very excited about the Copilot Studio, because you need to be able to tailor these experiences
for your specific business process needs. And so I think all of these will add up. And I'm hoping that, in '24, I think calendar year '24 is the year where we will have, I'll call it, classic commercial enterprise deployment, and deployment data for all of this.
Well, I wanted to start there because that's sort of the top line, right?
Customer demand, what are the problems that it's solving?
But I also want to talk about the bottom line and cost.
And that's where some of your chip announcements come in.
You talked about Azure Maia, Azure Cobalt.
Start with Maia, the AI accelerator.
This is not competing with NVIDIA necessarily,
or Jensen wouldn't have been on stage with you.
But starting with, you said, Microsoft's own workloads,
the software that Microsoft is offering out in the cloud,
this is going to help run that more efficiently.
What kind of savings, what kind of efficiency is possible, do you think,
with your own designed chip versus what you could get off the shelf? Yeah, I mean, the thing, Jon,
that we are seeing is as a hyperscaler, you see the workload and you optimize the workload. That's sort of what one does as a hyperscaler. A hyperscaler, meaning it's you, it's Amazon,
it's Google, you're the cloud.
And you've got billions and billions of dollars spent on these data centers. That's right.
I mean, so you're a systems company at this point.
I mean, everything from sort of how we source our power to how we think about the data center design, right?
The data center is the computer, the cooling in it.
Everything is all optimized for a workload.
So the fact is, we saw these training workloads and inference workloads, quite frankly, first, right? We have a three-, four-year advantage of trying to sort of learn everything about this workload.
That's kind of, to me, in a systems business, you have to be early to the next big workload
that's going to take over, so to speak.
And that's what we got right.
And so we've been hard at work on it.
The other thing is, when you're talking about AI, for us, OpenAI's models are the ones that are deployed at scale. Obviously, those are the models that we are both training at scale and deploying for inference at scale. It's not just a bunch of models, but it's this one model.
So we now have a great roadmap
for how we think about Maia,
how we think about AMD, how we think about NVIDIA, all in our fleet. Right now, as we speak,
we have some of the Maia stuff powering GitHub Copilot, for example. So you will see us deploy
our own accelerators and also take advantage. I mean, the other announcement today was the AMD
announcement. We are going to introduce MI300 into the fleet.
It's got some fantastic memory and memory bandwidth characteristics, which I think are going to make a difference.
And GPT-4 is already running on it.
So we are excited about, obviously, our cooperation and partnership, which is deep with NVIDIA, AMD, and our own.
Custom chips are the new black, right?
AWS has Inferentia and Trainium.
Google has its TPUs. What does it take to make yours better and get more benefit out of your systems working?
Yeah. So I think the way I look at it is, you don't enter the silicon business just to be in the silicon business. I think of the silicon business as a means to an end, which is ultimately
delivering a differentiated workload. So for example, that's why I don't even think of the silicon itself. I think about the cooling system.
I don't know if you caught that. What we did was we built an entire rack, which is liquid cooled, for Maia and everything. The thermal distribution of that entire rack is very different from a
traditional rack. Guess what? We built it because we can then deploy these in data centers we
already have, as opposed to saying, let's wait for the next big data center design, which is fully liquid
cooled, which, by the way, is also coming. So that's the level. When I think about the
advantages we will get, it's not just going to be about one sort of silicon, but it's going to be
the entirety of its system optimized for high-scale workloads that are deployed broadly,
like something like OpenAI
inferencing. Now, let's take a global perspective. Not long after we're done talking here, you're
getting on a plane going to San Francisco. Chinese President Xi is there. He would like access to all
of these innovations that Microsoft has been talking about, that NVIDIA has been talking about. President Biden says no. What should happen from here that both allows trade to take place and protects
intellectual property? I think that's a great question. I mean, at the end of the day,
nation states are the ones who define their policies. I mean, it's clear that the United
States has particular policy decisions that it's making on what it means to both have trade and competition
and national security.
And so as the states decide,
and in this case, obviously,
we are subject to what the USG decides,
we will sort of really be compliant with it.
And at the same time,
we do have a global supply chain.
The reality of tech as an industry today is it's globalized.
And the question is, how does it sort of reconfigure as all of these new policies and trade restrictions all just play out?
Whereas at least for now, today, the majority of our business is in the United States and in Europe and in the rest of Asia.
And so we don't see this as a major, major issue for us, quite frankly,
other than any disruption to supply chains.
The AI piece.
That's right.
That separation.
That's right.
Because most of our businesses, in fact, a lot of the Chinese multinationals
operating outside of China are our bigger AI customers, perhaps.
But China is not a market that we are focused on per se, domestically.
We are mostly focused on the global market ex-China.
For the customers, though, who have to operate in all of these different regions, all these different fields, as a hyperscaler, you've been building out data centers in those places so that you can abide by the rules. Does this friction make it more complicated?
Or in a way, does it benefit Microsoft's more diverse
global footprint that you have more options
to serve customers as these conflicts arise?
Yeah, I mean, I think I've always sort of felt
that in order to be a global provider
of something as critical as compute,
you just need to localize. That's why we have more
data center regions than anybody else. I always felt that data sovereignty, the legitimate reasons
for why any country would want it for their public sector, critical industry was always going to be
true. Also, let's not forget the speed of light issues. You need to have a global footprint in
order to be able to serve everyone in the world. And so, yes, I think at this point, having invested, having gotten here
and now gotten ahead on AI, it's going to work to our competitive advantage. But I think that this
is also about the maturity that one needs in order to deal with the world as is, as opposed to,
it's not like we're building one consumer service that's reaching, you know, two, three billion people. This is about reaching every enterprise public sector workload in the world with all of the compliance and security needs.
And that's a very different business than just having one hit consumer service.
Now, it's been about 25 years since Microsoft lost a big case versus the government, where it looked to some like Microsoft was
about to get smaller.
And yet here we are talking just after Microsoft won a big legal case, where you're getting
bigger with the addition of Activision Blizzard.
So I guess, in a way, congratulations.
But also, there's some work to do in the AI context here.
And now to integrate this, particularly in AI, and you talked about this a little bit on stage. What's the challenge of integrating this into Microsoft, into Azure, into your status as a hyperscaler, in a way that you get the full benefit of all of that content, of the gaming developer community, and of AI?
Yeah, I mean, to us, at the end of the day,
you know, when I think about AI, it's not about just this as another technology you add on the
side. It's about sort of changing the nature of every software category, right? Whether it's in
gaming, whether it is core Azure or Windows, all redefining every software category where AI is going to be core to what we do and the value
props we develop. The other important consideration is also to not think of safety as something that
we do later, but to really shift left and really build it into the core. Like, for example,
when we think about Copilot, we built all the responsible AI and guardrails right at the very beginning so that when people are deploying the Copilot, they know that they have the best safety around the Copilot built in. And so these are the things that we are going to do up and down the stack.
And that's why, as I walked through today, you know, from infrastructure to data to Copilots,
we're thinking of AI as the main thing with safety
as opposed to one more thing.
With the stock recently at all-time highs,
Satya Nadella, CEO of Microsoft,
thanks for joining us here on CNBC.
Thank you so much, Jon.