Angry Planet - China’s Stealing Our AI, but May Not Have to for Long
Episode Date: June 26, 2018

Artificial intelligence poses a number of different threats. It can make existing weapons more sophisticated and dangerous, it can help develop new weapons entirely, and it can easily be used to create... the ultimate surveillance state. All of that is happening particularly quickly in China, which has stated ambitions to lead the world in AI in the near future. Security expert Elsa Kania joins us to explain what's going on.

You can listen to War College on iTunes, Stitcher, Google Play or follow our RSS directly. Our website is warcollege.co. You can reach us on our Facebook page: https://www.facebook.com/warcollegepodcast/; and on Twitter: @War_College.

Support this show: http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.
Transcript
Love this podcast?
Support this show through the ACAST supporter feature.
It's up to you how much you give, and there's no regular commitment.
Just click the link in the show description to support now.
China has clearly articulated ambitions to, quote-unquote, lead the world in artificial intelligence
and become the world's premier AI Innovation Center by 2030.
You're listening to War College, a weekly podcast that brings you the stories from behind the front lines. Here are your hosts, Matthew Gault and Jason Fields.
Hello and welcome to War College. I'm Jason Fields. And I'm Matthew Gault.
Artificial intelligence is one of tech's favorite buzzwords at the moment. It's supposedly
being built into everything from your phone to your dishwasher. And of course, the military is
all over it. But what does it look like in reality and how scared should we be? To help us answer
these questions, we've asked Elsa Kania to join us. Elsa is an adjunct fellow focusing on technology
and national security at the Center for a New American Security. So thank you very much for joining us.
Thank you. It's great to be here. I always like to start off with the basics, and I think you can't get
much more basic than what do we mean when we say artificial intelligence. So that actually isn't
a basic question, unfortunately. There can be a lot of debate or a lot of conflicting definitions over what AI is and means.
And the joke, I suppose, is that the bar for what constitutes AI has always moved up whenever
it starts to work and be more integrated into our lives or devices.
But at a basic level, you could define artificial intelligence as the use of algorithms
to learn based on data, or sort of the notion of machines having intelligence.
And certainly from the 1950s to now,
the understanding of what that is and means
or could imply going forward has evolved considerably,
but certainly that a lot of what we're seeing today
that is characterized as artificial intelligence
is machine learning, particularly deep learning,
which essentially involves the use of algorithms to learn
based on often massive amounts of data.
And that can be a very powerful tool in many respects,
but also one that can be quite limited in terms of the fact that the data used to train these algorithms,
if it's skewed or unbalanced in certain respects, can impact their results.
And at least for now, as many experts have pointed out, despite and beyond the hype,
AI is still very limited, given this reliance upon the availability of massive amounts of data,
sort of a degree of brittleness or inability to adapt to unexpected circumstances.
And in addition, there are a number of vulnerabilities we've seen in terms of the potential
for algorithms to be spoofed or otherwise manipulated.
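To make the data-skew limitation Kania describes concrete, here is a deliberately minimal, hypothetical sketch (not anything from the discussion itself): a toy "model" that learns only the majority label from its training data, showing how a skewed training set degrades results on balanced real-world inputs.

```python
from collections import Counter

def train_majority(labels):
    """Toy 'training': return the most common label seen in the training data."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(predicted, actual):
    """Fraction of predictions that match the true labels."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Skewed training set: 90 "benign" examples, only 10 "threat"
train_labels = ["benign"] * 90 + ["threat"] * 10
model = train_majority(train_labels)  # learns to always say "benign"

# Balanced test set: the skewed prior costs the model half its accuracy
test_labels = ["benign"] * 50 + ["threat"] * 50
preds = [model] * len(test_labels)
print(model, accuracy(preds, test_labels))  # benign 0.5
```

Real systems fail in subtler ways, but the mechanism is the same: whatever imbalance the training data encodes shows up in the model's outputs.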
So certainly AI is a very powerful, very powerful concept, a very powerful technology and
one that means many different things.
And whether you're talking about particular types of algorithms in use or some of the
specific applications in play, it could really transform or enhance everything from the
products and services we use in our day-to-day lives, to enhancing economic growth, to having tremendous potential in a military context as well.
So not a basic answer there, and I could go on, but I'll stop there for now.
Well, what are some of the military applications of all of that?
So certainly, if AI is indeed like electricity and could electrify, cognify, or intelligentize
just about everything, then the military applications are quite far-reaching and ranging from those
that are more near-term and more incremental, perhaps, in their impact to those that could
arise in the longer term and be more transformative. So certainly, a lot of what has arisen so far
has been the use of techniques and technologies like machine learning and computer vision
to enhance intelligence, surveillance, and reconnaissance capabilities. So, for instance,
what the Department of Defense is pursuing with Project Maven. And I think certainly also more
incremental has been the focus on introducing higher levels of autonomy into unmanned, rather
uninhabited systems, everything from more autonomous drones to the notion of swarm intelligence
or the ability to construct a swarm of hundreds, even thousands of units that could be quite impactful
going forward. I think beyond that, there are a number of supporting functions in a military context in
which AI could be useful, such as logistics, predictive maintenance, and otherwise, even a lot of
the basic management functions that can be cumbersome and unwieldy in certain cases. And I think,
looking forward, I mean, certainly an early frontier could very well be information
operations, from greater use of automation in cyber warfare to the notion of cognitive electronic
warfare enabling more sophisticated capabilities within that domain.
I think that's also something we'll see, from the use of AI in cybersecurity products to
potentially greater automation in cyber operations.
I think that will start to become more disruptive in these domains in the years to come.
And I think looking forward to the longer term, the notion of AI weapons or lethal autonomous
weapon systems.
And again, we could talk for the whole podcast
about how those might be defined,
to what extent they are being developed
and what their implications may be,
but the notion of a weapon that can select and choose its own targets
or the notion of a sort of deeper weaponization of AI in that sense
is something that is provoking a lot of the concerns and anxieties
about killer robots or the Terminator.
Certainly, you know, it can be debated whether that sort of autonomous weapon
has existed for decades, or whether it may never exist, depending on how you define it and set the threshold there.
I think I'd also add that in looking to the future of warfare,
one of the applications that could be quite impactful is the notion of applying AI to command
decision-making to enable decision superiority on the battlefield, whether in terms of battle management
or enhancing situational awareness in ways that could support and enhance decision-making.
And certainly all of these applications, in addition to many more that I have not mentioned or have not imagined or that have not yet emerged as potentially impactful, are being pursued by a range of militaries around the world at this point.
From the U.S. to China, to India, to Israel, to many more going forward.
So it sounds like it's possibly going to change everything in ways that we are not even fully aware of yet.
Yes. And I think the only certainty is that there is tremendous uncertainty at this point about the trajectory these technologies
may take and the ways in which they may have potential impact or potentially major limitations
when applied to the context of national defense.
And I think a major factor as well, in terms of what this means for the
future of military power, the future of warfare, is how different military organizations
will decide to adopt and utilize these technologies,
and some may be more open to embracing that sort of disruption.
Others may be more resistant, whether for bureaucratic reasons
or due to legal, ethical, and moral factors.
So certainly there will be a tremendous amount of variability
in what the AI revolution, so to speak,
if it does materialize in the ways in which many are expecting,
will mean for different militaries,
for future military competition.
Can I ask you about a specific scenario?
Sure.
There was, maybe about a month ago,
a report that came out of the RAND Corporation
about the use of AI possibly destabilizing,
essentially, mutually assured destruction.
Did you happen to see that?
I did.
I think it's a great start to thinking about
the implications of artificial intelligence
for strategic stability,
and certainly there are real reasons for concern there in many respects.
And certainly one example that I would give in the context of the Chinese military that actually surprised me a bit was an account that came out.
And I'll caveat this with this was reported in the South China Morning Post, which is based in Hong Kong and owned by Chinese tech company Alibaba.
And sometimes its reports are characterized as techno-propaganda, a little bit hyped, a little bit exaggerated in certain cases, but this particular article talked about the use of AI in Chinese
nuclear submarines, and the article did not specify whether that meant nuclear powered or nuclear
armed in this case, and talked about the potential for AI augmented brain power to enable
major advantages in undersea warfare. And then another quotation from a Chinese researcher later
in the article alluded to the potential for essentially the notion of killer AI nuclear submarines
with enough weapons to destroy a continent, or something to that effect.
So I think certainly that sort of account should be taken with some degree of skepticism,
but I think something for which there is robust research underway in the Chinese defense industry,
such as the use of convolutional neural networks for acoustic signal processing or, at a basic level, the application of machine learning to enhance ISR in submarines,
I think is quite feasible and may be underway.
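As an aside on the technique mentioned here: the building block of a convolutional network is a learned filter slid across the input. Below is a minimal, hypothetical sketch (pure Python, not drawn from any actual sonar system) of a 1-D filter responding to a jump in a signal; stacked and learned from data, this is the operation underlying CNN-based acoustic classification. Note that deep-learning "convolution" is, strictly, cross-correlation, which is what this implements.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation: slide the kernel across the signal
    and take a dot product at each position."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * k for j, k in enumerate(kernel)) for i in range(n)]

# A simple difference filter responds strongly where the signal jumps,
# e.g. at the onset of an acoustic event in a flat background.
signal = [0, 0, 0, 1, 1, 1]
kernel = [-1, 1]  # edge-detecting difference filter
print(conv1d(signal, kernel))  # [0, 0, 1, 0, 0]
```

A real network would learn many such kernels from labeled acoustic data rather than hand-picking them, but the sliding-filter operation is the same.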
And I think the question would be, as AI in different forms
or higher degrees of automation start to be integrated either into nuclear platforms
or into some of the supporting ISR capabilities,
what does that mean in terms of some of the risks and threats to nuclear
and strategic stability going forward?
And I think, beyond a clearly nuclear context,
even the potential for the intelligentization of the future battlefield, to use the term that the Chinese military has started to apply to AI-enabled capabilities,
even that trend, if you have more precision and more power in a conventional weapon, could also be destabilizing to the military balance and to current deterrence.
So I think there's really a range of issues that may arise with the development of multiple military applications of AI,
and I think it's an open question of the extent to which different militaries
may be comfortable with applying them in support of their nuclear arsenals.
Russia seems to be considering a more forward-leaning posture in that regard, perhaps,
even with notionally having unmanned platforms to deliver nuclear payloads
or fully automated platforms. The Chinese military may,
for instance, try to leverage AI in different forms to support some of its early warning and
strategic capabilities. But I think certainly this is an issue I think that merits early consideration
given the stakes in play here. Does that, from what you're saying so far, it sounds to me,
and I know this may show a lack of sophistication, but that AI is mostly,
being used to improve existing weapon systems, to make them more accurate or to make human
decision-making simpler? In other words, you know, a drone would say, yeah, I actually
think that's Osama bin Laden, as opposed to giving less specific information, like there are two people
in this compound. I mean, is that kind of what artificial intelligence is really about at this point?
It's not about new weapons. It's about refining the ones that are already deployed?
Yes, I'd say that, at least in the near term, a lot of the capabilities AI, in various
contexts, promises to deliver will be more enabling and enhancing or supporting existing missions
and operations rather than radically transforming them. So I think certainly we've already seen
a wide range of unmanned systems across all domains of warfare. Adding higher levels of
autonomy could certainly increase their capabilities going forward and also enable them to
operate in denied and contested environments, which I think is where a lot of the relevance comes in,
in that so far, what we've seen primarily with military drones, or remotely piloted aircraft, has
been that, despite the name unmanned, they do require human involvement, often a considerable
degree of it. And if we're thinking about future great power competition and the potential
for a heavily contested battle space
in which those sorts of communications may not be survivable,
then having autonomous systems will be a major value added in that case.
I think certainly if you think about moving from precision weapons
to incorporating more AI-like technologies,
or just computer vision, automatic target recognition
that is more sophisticated into cruise missiles, which is something that the Chinese military
is working on, that certainly, again, enhances capabilities and enables more flexibility in
their employment.
So does China in particular, which I know is a real area of study for you, does China
pose a particular threat?
And actually, the other question I have is, from the way things are reported
here in the United States, we're constantly accusing China of stealing tech rather than developing
its own. In the case of artificial intelligence, is that also a charge that's been leveled?
At this point, to speak of China's rise in artificial intelligence has almost become a cliche,
really. And I think certainly over the past couple of years, this trend has drawn increasing
attention as China has sort of surpassed the U.S. by metrics such as the number of patents
or publications.
We've also seen, and this has been a major source of concern in certain respects,
that China has clearly articulated ambitions to, quote, unquote, lead the world in artificial
intelligence and become the world's premier AI Innovation Center by 2030.
So certainly the aspiration is there for China to emerge as a true powerhouse in AI
and to leverage its potential across a range of applications: everything from enabling China's economic growth
to accelerate and overcome the middle-income trap that could otherwise result in poor
economic performance in the years to come, to pursuing a
range of applications that support the security of the party-state, such as AI in policing,
censorship, and surveillance, and also looking to
a military context for a range of national defense applications.
The U.S. is still clearly in the lead, at least for now, in terms of cutting-edge
research and next-generation algorithms.
It remains to be seen whether China's future trajectory in AI will really live up to
these ambitions and the potential that it does have, based on the likely availability of
top talent going forward if these educational initiatives pay off,
the ability to leverage massive amounts of data, including the fact that China may have, by some
estimates, 30% of the world's data by 2030, and also the ability to leverage close relationships
and partnerships between the government and the private sector and between commercial
developments and defense applications. In recent history, some amount of Chinese defense
innovation has clearly been driven by both licit and illicit means of tech transfer,
ranging from targeted acquisitions to outright intellectual property theft, whether through
cyber or through human means.
And I think that increasingly, the Chinese government and military are seeking to progress beyond
that, and not merely to be a fast follower that is trying to rapidly
adapt and introduce these technologies in order to catch up, but increasingly instead aspiring
to really become a leader and pioneer and to pursue truly original disruptive innovation
in artificial intelligence and other critical frontier technologies such as quantum
communications, computing, and sensing as well.
How advanced do you see this stuff getting? You've written about something called the
Battlefield Singularity before. Can you explain that concept to us?
Sure. So I suppose to start Battlefield Singularity was the title of a report that I released
through CNAS, the Center for a New American Security, last November. And that was really my
attempt to hopefully create a foundation based on some of the research I've done so far for
understanding the national defense dimension of China's emergence as an AI powerhouse and
looking both at the broader context of Chinese AI plans and also more specifically at how the
Chinese military thinks about artificial intelligence and the future of warfare and some of the
applications under development by different institutions from academia to the Chinese defense
industry and otherwise. So I chose the title battlefield singularity in part because I wanted to
have something a little bit more creative than something-something-dragon-with-Chinese-characteristics. I
wanted to hopefully come up with something a little bit catchier that I thought would be evocative of some of the trends I was describing.
And although I think whether, and at what point, we might see a real singularity in warfare is far too soon to say, and will continue to be debated in the years to come, or even what that would mean if AI reaches a point where it does start to surpass humans in a range of contexts, as we've started to see from chess to Go
and beyond. But in certain Chinese writings from, for instance, defense academics or strategists,
I started to see a major focus on the impact of AI on command decision making and even the notion
that as AI becomes more pervasive on the future battlefield and as military operations become more
complex and occur at speeds perhaps beyond what the human mind can keep pace with, we may see
a shift in the character of conflict, so to speak,
in which unmanned systems, robotics,
and AI-enabled systems are much more at the forefront,
and humans are not quite so directly involved in decision-making,
and this could be considered and has been characterized
as a singularity of sorts on the battlefield,
a point at which the role and involvement of humans
starts to change in fairly fundamental ways.
And one question that is often asked of me is whether or not I think the Chinese military is likely to take humans entirely out of the loop.
And I think there are reasons to be skeptical, given aspects of the Chinese military's command culture and organizational dynamics, of whether they'd be willing to do so, given a preference for highly centralized control, an unwillingness to grant autonomy, so to speak, to lower-level personnel, and a characteristic control of strategic capabilities, whether that be nuclear,
space, or cyber, directly under the Central Military Commission. So I think there are sort of aspects of
how the Chinese military thinks and operates that may make it less likely to be comfortable
with having humans less involved in decision-making, or the delegation of sorts
that it would involve. But at the same time, I think at least some of the more speculative
future-looking writings I've seen do, again, raise this notion of a singularity, if that's
the right word, so to speak, or really just a point at which AI takes on a much greater role
in decision-making and alongside whether augmenting, enhancing, or perhaps at some point even
replacing commanders. And I think that a lot of the debates in the U.S. and the knee-jerk reactions
against the notion of taking humans out of the loop don't seem to be quite as intense or salient
in China. So I think certainly if there were a point at which a major advantage could be gained
through a more automated or autonomous approach to operations, I think the Chinese military
could be less constrained by the very deep ethical and moral concerns that the U.S. military may look to,
or at least that have manifested so far in a lot of the debates we've had on these issues in the U.S.
Well, so this conversation honestly makes me wonder, are we talking about Skynet from the Terminator becoming real?
Do you see that as, is that something we need to worry about?
So there are a number of those who are concerned about killer robots.
You can watch the Slaughterbots video, for instance, to see one dystopian vision of how these technologies and capabilities might pan out.
There's also a lot of talk of a terminator scenario.
But certainly, again, the only certainty is uncertainty about how these technologies will develop.
And I think their tremendous limitations in terms of brittleness and vulnerability could also be a major impediment to their adoption by militaries going forward.
But I think certainly given how rapidly these technologies are developing and evolving, I would say nothing is impossible.
But personally, I am more concerned about some of the more immediate risks rather than more long-term sci-fi-like scenarios, which I think also may merit real
concern and real consideration at this point. But I think I'd first look to some of the basic
questions that arise with human-machine teaming or human factors. For instance, in the U.S.
military's history with highly automated or autonomous systems, such as the Aegis or Patriot,
there have been a number of incidents and accidents that reflect just how difficult it is
to operate systems with that degree of complexity. And even if, at face value, having an
autonomous or AI-enabled system may seem to imply that it's easier for humans, that you just
sort of sit back and let them operate, I think the reality is that the introduction of these
technologies will place a range of demands on those who are using and working with them and
perhaps require new approaches to training or a high level of technical proficiency.
So I think certainly, for all militaries looking to adopt and operationalize these
technologies, there will be major challenges to begin with, resulting from that complexity.
The sci-fi-like risks that may arise in the more distant future, I think, could
materialize someday and are worth perhaps looking ahead to consider now. But I think certainly
prioritizing some of the very real and immediate concerns that we're already starting to see
have an impact around the world, would perhaps be a better place to start.
Okay, thank you very much, Elsa.
We really appreciate your time.
Thank you so much.
Thank you.
Thanks for listening to this week's show.
I want to say thank you because we've been getting some really good feedback from listeners recently.
For example, people especially like Matthew's episode on Occult Ideologies.
We're going to poke around and see how to fit more shows like that into our stream.
You can always get in touch with us in a couple of different ways.
One, review us on iTunes and tell us what you like.
You can also get us on Facebook at facebook.com slash warcollege podcast and tell us what you want to hear more about.
War College is incredibly important to Matthew and I.
We do it because we love it.
So help us get the word out if you love it too.
The show is me, Jason Fields, and Matthew Gault.
We'll be back next week.
