In Good Company with Nicolai Tangen - Vice Chair and President of Microsoft: AI and Geopolitics
Episode Date: August 9, 2023

How should we regulate AI? How will AI impact the power balance between the US and China? And how does Microsoft navigate this complex landscape? In this episode, Brad Smith, Vice Chair and President of Microsoft, shares his unique insights on these questions and more. We are also joined by Ulf Sverdrup, the leader of the Norwegian Institute of International Affairs, who will be offering his commentary. Ulf is a world-leading expert in international politics, making him the perfect guest for a discussion on the intersection of AI and geopolitics. We are releasing this episode in collaboration with the NUPI podcast The World Stage.

This episode was produced by PLAN-B's Nikolai Ovenberg and Niklas Figenschaug Johansen. Background research was conducted by Sigurd Brekke, with input from portfolio manager Richard Green.

Links:
Watch the episode on YouTube: Norges Bank Investment Management - YouTube
Want to learn more about the fund? The fund | Norges Bank Investment Management (nbim.no)
Follow Nicolai Tangen on LinkedIn: Nicolai Tangen | LinkedIn
Follow NBIM on LinkedIn: Norges Bank Investment Management | LinkedIn
Follow NBIM on Instagram: Norges Bank Investment Management on Instagram

Hosted on Acast. See acast.com/privacy for more information.
Transcript
A very warm welcome. I'm really happy to be here with Brad Smith. Brad is the president of Microsoft.
Microsoft is the second-biggest holding in the fund, and each Norwegian owns roughly 40,000 kroner worth of it.
So it's pretty amazing.
And I also have Ulf Sverdrup here. He's the director of NUPI, the Norwegian Institute of International Affairs.
And today we are actually going to talk about artificial intelligence.
So really pleased to have you here.
Well, thank you. It's a real pleasure. It's a privilege for me to be here with you and
to say thank you to everyone in Norway who owns a piece of Microsoft.
Now, Marc Andreessen recently wrote that we should not regulate AI.
We should stuff it into everything we have in order to beat the Chinese.
And that was under the headline, we win, they lose.
Now, why is that not the correct thing to do?
Or is it?
Well, I would say, I think it's good to put AI to use. It's good to put AI to good use,
and it's good to have guardrails that, in effect, will keep AI on the road and keep it under human
control. The opportunity to use it for good is enormous: to improve healthcare, to find new cures
for cancer, to develop new drugs, to improve education for every student,
to improve productivity. So I think the potential is enormous. But as we've said in the past,
every tool can become a weapon. Almost every tool is unfortunately turned by somebody into a weapon.
It will take not just responsible companies, but law and regulation to manage this properly.

But how can one regulate something which is integrated and ingrained into weapons, medicine,
autonomous driving, just everything we do?
How can you regulate one single piece of it?
I think you're hitting on an important point, which is that AI is not one thing.
It's many different things. And we should recognize that as we build a regulatory architecture
that matches the technology architecture of AI itself.
And so what does that mean?
Well, it means that if, say, you're a bank and you have an application that's using AI,
it will be your responsibility to ensure that you still comply with all the banking laws.
And people in the bank will need to be trained.
The banking regulators will need to be trained in order to ensure that.
Well, look at these new models like GPT-4 from OpenAI.
That's where we at Microsoft have, in part, deployed the technology so that OpenAI could train it.
And let's just say there'll be certain safety standards.
I think that there'll be safety practices and processes
that need to be followed when models are built.
There will need to be some kind of safety certification
before the model is, say, released.
It may even need to be licensed.
The data centers where AI is deployed will need to be protected
from, say, a physical and cybersecurity perspective. Those are just three examples. What it means is you break this down into pieces
and you focus on each piece in a much more pragmatic way.
But you're a commercial company. You are listed. You have shareholders like us.
You want to make as much money as possible, right? So why do you want to kind of self-regulate?
Well, I think we frankly both want to self-regulate
and we do support a degree of government regulation
because ultimately companies do well in the long term
when people have confidence in what they're buying and using.
And if you lose the public's confidence,
you fundamentally, I think, put your investment and the interest of
your investors at risk. Our goal is to provide a return for the investors here in Norway,
not just for the next week or month or quarter, but for years into the future.
So as we like to say, we need to manage for the decade and the quarter simultaneously.
And the best way to do that is to be responsible.
And I just don't think there's any substitute for that.

So one of the big challenges, I think, is that companies, of course, have to be conscious of their responsibility and adopt self-imposed regulation and restrictions.
At the same time, these need to be complemented by government regulatory measures.
And those government regulatory measures
cannot be only in one jurisdiction.
They have to be international.
So how do you see that moving forward?
I think this is a hugely important part of the challenge.
I would actually step back and say, look at two things.
First,
wherever one goes in the world today, I find people in government or in the public are saying,
let's not repeat the mistakes we made with social media. And yeah, I think it's fair to ask,
what were those mistakes? And I think to some degree, we all became a little too euphoric.
In the wake of the Arab Spring a decade ago, everyone said social media is going to be the savior of democracy. And then within four or five years, it had been weaponized and
turned against democracy itself. So let's start by being clear-eyed and having a common conversation
and understanding on an international basis about what can go wrong. What are the problems we need
to solve? I think in part, it's about
making sure that AI is used safely under human control, that we protect security and the like,
and have a short but common and critical list of problems. Then second, let's devise specific
solutions which still have not really come into being with respect to social media, and let's pursue them on a coordinated international basis. This will not be easy, but if we're going to make
the most of the AI moment, I think it's an essential step we need to find a way to take.
But how does this need for coordination sit with how the world is looking these days,
where we have more geopolitical rivalry, more competition,
and, frankly, very little attention to and interest in cooperation?
I think one can always get up in the morning and look at the world's problems
and either focus on what we cannot get done or what we can get done and how we might do
that. And for me, that starts in part by looking at what I'll call the like-minded countries.
Where are there countries that have shared values? Fundamentally, in large part, we're talking about
the world's democracies. Where do they have shared interests?
And let's build shared norms. And there will be countries that will not want to join. There will
be countries that will even resist and oppose this. But let's start by building a common
foundation. And when we have that foundation in place, we also have a greater capacity to then talk with other parts of the world that may look at this differently.
And what could such a body look like?
I mean, who would be involved at what level?
What would it look like?
In the world today, there are a few different groups that can come together and really act on an international basis.
Certainly in Europe, people naturally look to the European Union, and that's a critical
part of this.
But I think in these issues, people are increasingly looking at, say, the G7, the G7 plus, say,
India and Indonesia.
The European Union and the US have said together they want to pursue a voluntary code for AI
with that group of countries.
We see things move sort of to the G20,
but it's really the G20 minus China and Russia, so it's sort of the G18. But you see different things come together. I think there's another dimension, though, that is very important.
What we're fundamentally talking about is multilateral diplomacy and action. In the 21st century,
I think multilateral action usually requires and involves multi-stakeholder coalitions.
And they typically focus on doing one or two things well. So the Paris call for trust and
security in cyberspace was really invented, as the name
suggests, in Paris under President Macron's leadership in 2018.
Microsoft participated in that.
The Christchurch call was really led by Prime Minister Jacinda Ardern, with the help of
tech companies, including our own, in the wake of the Christchurch massacre.
And fundamentally, what people have done is said, let's bring like-minded governments
and companies and civil society together.
Let's protect elections.
That's part of the Paris call.
Let's ensure that terrorists don't engage in mass shootings the way they did in Christchurch,
New Zealand, and stream it over the internet.
And I think if we focus, narrow a bit
the problem we want to solve, take it one at a time, and build these kinds of coalitions,
one can actually move a lot faster. And that's a good thing.
Ulf, what do you think about what we are seeing out of the EU so far in terms of the AI Act?
So the EU, European Union, is a big market
and they have regulatory ambitions
and they also have geopolitical ambitions.
So they don't want to just copy laws and regulations
or policies coming out of the US.
So the EU is about to establish its own AI Act.
It has passed in the Parliament, but there is still some work to be done in order to complete it.
But they plan to put it into place by 2026.
And I think they have a fairly sound approach to it, with a risk-based assessment: some things they consider prohibited, some things they consider high risk, and then some thresholds introduced for the activities below that, particularly related to transparency, sandboxes, et cetera.
But the fundamental question is, of course, can Europe regulate this alone?
How can they cooperate with the US in the regulatory space?
And perhaps also the competitiveness agenda.
Will European businesses be able to compete under the European framework or will they
move elsewhere?
So there are lots of difficulties in regulating.
So Brad, how much mess can we make in Europe?
How many rules and how much regulation can we make before the Americans just think, you
know, let's not go there, let's not be bothered?
The good news for the European Union is that the
market is so large that it's difficult for any company, in my view, to aspire to global importance
without being present in Europe. And so that is a strength. Like all strengths, one can put it at
risk if one goes too far in a different area.
I would generally agree, though, that the AI Act has been progressing in a thoughtful way.
I was certainly one person among many who saw the first draft and said, oh, my goodness, this is going to regulate Europe out of AI use, and that would be bad for Europe and
the competitiveness of European
industry and commerce. But I think, as is often the case in any sound legislative process,
people use time to educate themselves, to get smarter, to compare views. And there's still
more of that work ahead. The parliament and the council and the commission will come together.
And I'm optimistic that the European Union will end up with what I'll call a forward-leaning,
innovative, but balanced and appropriate framework. And if it does that, I think it will find it much
easier to then coordinate with, say, the United States, with Canada, with Japan.
It takes us back to the G7.
The G7 actually is a good group because it does bring together enough countries outside Europe.
And then when you add, say, India and Indonesia, even more so,
that it sort of helps, I think, countries find more common ground on a global basis.
Now, where are we in the AI race if we start to look at the U.S. versus China?
I mean, I appreciate there are different areas of AI, but just how does the competitive landscape look?
I would put it into two categories.
The first is the development of AI.
And I think the U.S. is leading when it comes to the development of large-scale, say, large-language models like GPT-4.
And you have every day this partnership between OpenAI and Microsoft competing with, say, Google and its acquisition of DeepMind. You have a company like Anthropic. Three different groupings really advancing what you would think
of as frontier models for AI, which as the name suggests, are AI models at the frontier of
technology. We're months, not years, ahead of China in this race. And in other areas, I think China
is actually ahead: narrow, special-purpose AI uses like vision, where they just have
so much data that they collect and AI they develop based on identifying people's faces and
computing algorithms on top of that. That's the race to develop AI. I think increasingly,
what we'll also see is the race to deploy it.
One should never underestimate for a moment how quickly China moves as an economy in deploying new technology. I think when you get into that context, you have China, you have Asia,
where people are embracing it very quickly. You have North America, and then you have Europe.
And I think part of the conversation in Europe,
hopefully, will focus on both of these dimensions.
It's not just about building it.
It's about getting the benefits from it.
So you mentioned language models.
You mentioned facial recognition.
What are the other kind of verticals which you are monitoring?
Well, you certainly see specific silos, if you will,
in different areas. You know, vision is one. You know, you definitely see a vertical around,
call it biometric data, around genomic data, work that very much turns on getting access to a large sample of human data.
And there you see a country like China in a strong position.
You actually see a country like the United Arab Emirates in a very strong leadership position.
Basically, what I would say more broadly is, if you think about something like ChatGPT or GPT-4, say that it can do 1,000 things well.
Well, somebody can try to build 1,000 different models that each does one thing well.
Obviously, no one's going to build all of those, but it's not as computationally intensive.
It's not as expensive.
It's probably possible to go faster.
And so to some degree, there will be a race between different technology models in terms
of how much each model is designed to do.

When you talk about this geopolitical rivalry with China, do you think that in the future, Microsoft will be a company that serves the G18 or the G7 plus, rather than a global company?

What is interesting about Microsoft today is we are a global company, but we don't
play the same role in every country. One of the things I'm always interested in each year is the
Economist publication from the Economist Intelligence Unit, where it lists the countries
of the world and it ranks who's a democracy and who's not. And roughly 95% of Microsoft's
revenue comes from the democracies of the world, and 5% comes from the other half.
And what that reflects, I think, is the broader and I'll even say more special role that we
play in the world's democracies, say in Europe, in NATO.
We fundamentally are the providers of the digital infrastructure on which the economies
grow.
And we play a role in promoting and protecting and even defending the democracies
of the world. And you see that in a very pronounced, even dramatic way in Ukraine today.
That doesn't mean we play no role in other places, but it is a more limited role. We're
present in China today, even though it's only about one and a half percent of Microsoft's revenue.
And in part, we're there to serve multinational companies so that they can use our technology in China the same way they do in other places.
In part, we're there, in my view, to advance common global goals like the reduction of
carbon emissions and using technology to improve the sustainability of the planet.
And there are certain things that we avoid doing.
We protect against what we would regard as human rights abuses
so that our technology is not used for that purpose.
For obvious reasons, we don't provide sensitive technologies broadly,
and we have special controls in place, for example, so that the Chinese
military is not using it. And so what one recognizes, I think, is fundamentally, rather than
a world where we're present or absent in a binary way, we're present in a very ubiquitous way in
part of the world, we're present in a more limited way in other places.
And then you get to a country like Russia where we're basically not present at all.
So the world is complicated, and we have to manage that complexity.
And what kind of expectations do you think we as large shareholders should have of the companies when it comes to their use of AI?
I think that fundamentally, what I hope our investors will look for in a company like Microsoft is what I shared, for example, with our own employees, the 758 employees we have in
Norway. I think our goal is to become and remain the most trusted company for what has become the most important technology
in the world. The key is to innovate so that we're always at the frontier of technology,
but to do it in a way that sustains people's trust. First and foremost, the trust of our
customers so they know they are using a product that can be deployed safely and
securely and the like.
But fundamentally, I just think in some ways, a tech company is not entirely different from
a bank.
First of all, in the world today, if you want to get something done that's truly important,
there's probably two things you need.
The first
is money, and the second is technology. But second, like a bank, we need to be an institution
that people trust, not just for tomorrow, but for years and decades into the future.
When we look at AI and how companies use it, what are the types of things we should be looking at, along the lines of transparency and so on?
But what are the factors that you would split it into?
Well, the very first thing I would look at if I'm a company is,
is this technology going to make me better as a business
or a government, for example, or an NGO?
So I would start with that.
And fundamentally, what I would say in that context
is our vision is encapsulated
in the word that we apply.
We call this a co-pilot.
It is not designed to replace the human need to think.
It is designed to help people think better, faster, more efficiently, and to advance productivity
in the central line of business of a company and in every supporting function, sales, marketing,
advertising, finance, legal,
human resources.
And I think it's up to us to show that you can drive a better business with this technology
than without it.
Your people will be more productive.
What we're finding is that people who use this to write software code are not only more
productive, they're actually happier employees because it eliminates a lot of drudgery.
That's the first thing.
But second, I think one does need to look at the risks it can create and have confidence
that first we've put in place the practices to manage those, to measure them, to mitigate
or reduce them, to address, say, privacy risks or security risks.
And then we share that so that when you deploy it, you're able to do so with confidence.
I think the word transparency is a critical one, as you referred to it.
One of the challenges with this technology is, in truth, we're still getting better at
figuring out how to
make it explainable to people. I often have people come up to me and say,
why did ChatGPT give me that answer? I do think you're going to see ongoing advances
in what is really referred to as the explainability of AI, and I think you're right to use the word auditing.
We will see the development of standards for what we think of as responsible AI,
all of these factors that people are concerned about. And then once the standards are created, you'll see auditing against that,
both internally inside a company like Microsoft,
but you'll see third-party audits, even government audits,
I suspect, emerge in a short number of years.
Brad, you said Microsoft is working to support democracies also.
But at the same time, some critics of AI would say that AI has the potential to undermine democracy, spreading fake news, breaching copyright rules, etc.
So what would be your response to that kind of criticism?
It's really twofold.
First, I think we should be clear-eyed.
We should recognize what some actors, especially foreign governments, are doing. Take the Russian government,
which operates a global, at-scale network to generate information designed to mislead the public, and it puts it
out in 23 languages. We monitor that. And for better or really, in this case, for worse,
unfortunately, we should assume that bad actors will use AI to do what they do more effectively, to generate content more quickly at less cost.
They'll likely use it to try to target the dissemination of, call it, disinformation
in that kind of way.
That's the bad news.
I would say, though, I believe there's good news that is going to outweigh the bad news
because we have the ability to harness the power of AI to better
defend against these threats. That to me is the fundamental lesson of the war in Ukraine where
defense has proven to be stronger than offense. And I am optimistic about the power of AI from
a defensive perspective in part because we have so much data. Second, we have the world's best experts
to figure out how to harness the power of that data. And those experts can now use AI as a game
changer because the biggest challenge in having so much data is the ability to sift through it
and detect patterns and then develop defenses once you identify the threats.
AI is a game changer for helping smart human beings make use of more data. So I'm optimistic,
and I would say more than that, I'm determined. I think we at Microsoft are determined. It's
really an imperative for us. We have to use the power of AI to improve the
defenses of democracy better and faster than the adversaries of democracy can turn it into a weapon against it.

In your book, Tools and Weapons, you talk about how these things can
influence elections. Do you think it will influence the next presidential election in the US?
I think we should assume
that there are foreign adversaries
that will seek to use digital technology,
including AI,
to try to impact the next presidential election
in the United States,
potentially the next parliamentary election
in the United Kingdom,
potentially the whole round
and range of parliamentary elections for, say, the European Parliament. We have to be prepared.
The most naive thing we could do, I think, is assume that everything will go fine and we don't
need to have better defenses. So we're focused in 2023 on how we strengthen those defenses for 2024, how we work with
candidates and campaigns and political parties and democratic institutions, and how we might
find some new collaborative efforts that would bring together, say, governments and civil
society and tech companies to defend against some of the new threats, say, the use of AI
to create deep fakes.
And so there's a lot of work that's starting now.
It has to go quickly.
I think from my perspective, we have a deadline.
It's called the 1st of January 2024.
We have to have strong and in some cases better defenses in place to address these risks.
And to what extent does AI make cyber even more of a threat?
Well, I think AI can be used to try to develop more potent, call it cyber weapons. It can be
used to better target the delivery of those cyber weapons against specific victims. But here again,
I am very optimistic that it will strengthen defenses faster than offenses. Fundamentally, because what we've seen, say,
over the last five or six years is two really important advances in cybersecurity defensive
technology. The first is threat detection.
You have to detect an incoming attack.
And the second is the ability to defend against it, which is fundamentally based on the ability
to detect it and to code, if you will, a vaccine to protect a device against it.
And today, in a way that was not the case five or six years ago, if you work for a government
or a company, in all probability, you have to enroll your device, your laptop, your phone,
so that it is the beneficiary of what we call endpoint protection.
Fundamentally, what this means is when we at Microsoft identify a new cybersecurity
attack, we detect the threat, and we are able to code the protection, say, the vaccine,
to stop it. And then we're able to distribute that code, that vaccine, to every device around
the world in somewhere between minutes to a couple of hours. And if we can now use AI to
detect threats faster, to create the code to vaccinate devices faster, I think we can stay
ahead of these offensive attacks.
And that really needs to be our goal.
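To make the detect-and-distribute loop Brad describes easier to picture, here is a minimal, purely illustrative sketch in Python. Every name in it is hypothetical; this is not Microsoft's actual endpoint-protection system, just a toy model of the idea that a newly detected threat is turned into a signature, the "vaccine", and pushed out to every enrolled device.

```python
# Toy sketch of the "detect, code a vaccine, distribute" loop described above.
# All names are hypothetical; real endpoint protection is far more involved.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ThreatSignature:
    """The 'vaccine': what enrolled devices should recognize and block."""
    threat_id: str
    pattern: bytes
    issued_at: datetime


class Device:
    def __init__(self, name: str) -> None:
        self.name = name
        self.signatures: dict[str, ThreatSignature] = {}

    def apply_signature(self, signature: ThreatSignature) -> None:
        # Receiving the signature is what "vaccinates" the device.
        self.signatures[signature.threat_id] = signature

    def is_protected_against(self, threat_id: str) -> bool:
        return threat_id in self.signatures


class EndpointProtectionService:
    """Toy central service: detect a threat, code the protection, distribute it."""

    def __init__(self) -> None:
        self.enrolled_devices: list[Device] = []

    def enroll(self, device: Device) -> None:
        # Devices opt in so they receive every new signature that is published.
        self.enrolled_devices.append(device)

    def handle_new_attack(self, threat_id: str, observed_payload: bytes) -> None:
        # 1. The attack has been detected; 2. turn it into a signature;
        # 3. push that signature to every enrolled device.
        signature = ThreatSignature(threat_id, observed_payload, datetime.now(timezone.utc))
        for device in self.enrolled_devices:
            device.apply_signature(signature)


if __name__ == "__main__":
    service = EndpointProtectionService()
    laptop = Device("laptop-oslo")
    service.enroll(laptop)
    service.handle_new_attack("worm-2023-001", b"\xde\xad\xbe\xef")
    print(laptop.is_protected_against("worm-2023-001"))  # True
```

The speed Brad points to comes from the fan-out step: once the signature exists, distributing it to enrolled devices is fast, which is why enrollment matters in the first place.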
How do you see the role of the company?
So in this world, technology companies are getting really big.
Microsoft, your market cap is bigger than the GDP of Korea. So how do you see
your role in shaping politics as a global actor yourself in this new geopolitics?
It's a question that people often ask about. And I believe in a world where no one is above the law.
And that means no person's above the law, no government's
above the law, no company, no tech company, no product is above the law. So what that fundamentally
means is you need law. And then as a company, you have to respect and abide by the law of each
country in which you operate. That, I think, is a first principle that fundamentally has served humanity
extraordinarily well going back to, I'll say, the times of ancient Greece or even earlier.
The real challenge is how do you make this work in a world where technology is global and moving
so quickly and governments are confined by territory. And I think what it means is that we have a huge
responsibility, first, to provide information so governments know where the technology is going.
Second, to collaborate, especially in areas like the protection of cybersecurity, especially in an
era where you have a shooting war, a real physical war in a country like Ukraine.
I think it means that we all have
to find new ways to collaborate across borders. If every country goes it alone, if every government
tries to operate with complete independence and with disregard for the needs of others,
if in a worst-case situation governments try to regulate technology beyond their own
borders and start bumping into each other, we risk creating not just a cacophony but
an incredible confusion.
We're going to need to innovate.
We need our governments to innovate, and it does take us back to the need for multilateral
and multi-stakeholder diplomacy.
And it changes the nature of a company like Microsoft in terms of how we need to engage. I always think from a starting point
of humility, let's start by recognizing everything we're not and everything we don't know,
but let's do what we can to share information and contribute to real solutions.
Brad, you talk about standards against which we can audit.
What type of standards are we talking about?
I think we will need a new age and era of AI safety standards, standards that fundamentally focus on the risks of what can go wrong with AI and protect against them.
And the good news is we're seeing some of this emerge already, whether it's in the context
of voluntary international standards or, in the United States, the National Institute
of Standards and Technology, NIST.
They've created an AI risk management framework that I think has a lot of promise.
And it builds on a decade and more of work by NIST in the cybersecurity space.
So companies, broadly speaking, are familiar with it, not only in the United States, but
more broadly.
So let's let the standards organizations develop the standards.
Let's encourage them to do it in what I would call a non-political way, grounded in a lot of interaction with the
people who create technology, but fundamentally as their own decision makers. Once we have those
standards, let's encourage companies to implement them. One of the things we've encouraged in the United States is that the government consider an executive order that, in effect, would create incentives, where the federal government would say it will procure AI for certain uses only from companies that are
self-certifying that they're implementing the NIST AI risk management framework.
Let's ultimately find a way to combine standards with law and regulation. I'll give you an example
that has spoken to me, something we take for
granted every day. You get on a bus, it has an emergency brake. And when I have studied this,
what's been interesting to me is the laws that require buses to have emergency brakes are
typically implemented at a national or even a municipal level. But they work because there has been a common standard created for emergency brakes for
buses.
We're going to need the same type of thing, a standard for, think of it as the safety
brakes for AI that might be used to manage the electrical grid.
If you have a common standard, then local or national governments can create laws
and rules that build upon it.
So the two things really go together in tandem.
Well, somebody compared AI to being on Apollo 11, on our way out into space.
And it's hugely exciting, but also a tiny bit scary.
It's for sure been a real honor to have you here, Brad,
and also a big thanks to Ulf Sverdrup, who's helped out.
All the best going forward, and good luck.
Well, thank you.
It's probably a perfect note to end upon
because we should remember that with Apollo 11,
the goal was not just to send a man to the moon.
It was to bring him back safely to Earth.
We need to keep that same balance in mind as we look to the future of AI.
Absolutely. Good luck with that.
Thank you.