The David Knight Show - INTERVIEW Four Battlegrounds: Power in the Age of Artificial Intelligence
Episode Date: March 9, 2023
While the book "Four Battlegrounds: Power in the Age of Artificial Intelligence" focuses on AI in the context of power and competition between the US & China, Mr. Scharre writes, "This book is about the... darker side of AI." It's not the usual concerns about AI becoming sentient and malicious, but AI used maliciously by humans. Paul Scharre is a former Army Ranger who served in Iraq & Afghanistan, author of the award-winning study of autonomous weapons "Army of None", and VP & Director of Studies at the Center for a New American Security.
Find out more about the show and where you can watch it at TheDavidKnightShow.com
If you would like to support the show and our family, please consider subscribing monthly here:
SubscribeStar: https://www.subscribestar.com/the-david-knight-show
Or you can send a donation through Mail: David Knight, POB 994, Kodak, TN 37764
Zelle: @DavidKnightShow@protonmail.com
Cash App: $davidknightshow
BTC: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7
Money is only what YOU hold: Go to DavidKnight.gold for great deals on physical gold/silver
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-david-knight-show--2653468/support
Transcript
Joining us now is Paul Scharre. He has a previous book, Army of None,
about artificial intelligence. He is a former Army Ranger who served in Iraq and Afghanistan.
His book on autonomous weapons, Army of None, was an award-winning study.
He is Vice President and Director of Studies at the Center for a New American Security.
And this book, which is a real page-turner for something that is heavy into technology,
but also politics, geopolitics, covers a wide range of areas.
And I've got to say, I really did enjoy it.
It's a massive book, but I did enjoy reading it. The book is Four Battlegrounds, Power in the Age
of Artificial Intelligence. Thank you for joining us, Mr. Scharre. Thank you so much for having me.
Really appreciate it. Well, thank you. I want to focus at the very beginning of the book,
and this is one of the things that hooked me. This book is about the darker side of AI.
And that's what I want to focus on. Too often,
we get this Pollyanna version of the future, you know, and everything is going to be
just shiny new toys and technology. But the reality is a little bit concerning, isn't it?
I thought it was interesting that you began the book with a talk about an AI dogfight.
And again, there's a lot of great anecdotes through this, which makes it such a
good book to read. Tell people what was happening in DARPA's ACE program, that's Air Combat Evolution.
Yeah, thanks so much. Well, I'm glad you enjoyed that one. I thought it was really exciting to
learn about. I talk at the opening of the book about DARPA's ACE program, Air Combat Evolution,
and the DARPA Alpha Dogfight Challenge.
So the ACE program is designed to create an AI agent that can go into the cockpit to assist
human pilots. And the Alpha Dogfight Challenge that DARPA did a few years ago, taking a page
from AlphaGo that beat the best humans at Go, was designed to beat a human in dogfighting in a
simulator. And there's a lot of caveats that apply from a simulator to the real world.
It's not the same.
Right.
But nevertheless, a big challenge because that's a very difficult environment for humans.
You're maneuvering at high speed, requires quick reflexes, situational awareness,
anticipating where's the other pilot going to go.
Yeah, let me interject here and say, you know, one of the things that surprised me about that
was that because of technology, typically missile technology, right, you don't
have dogfights anymore. But dogfighting is really a measure of pilot skill, and that's how they were using
it. So tell us how it went. That's right. Pilot skill. And it's also pilot trust,
pilot trust in the AI, right? If the AI can do dogfighting, then it's going to help pilots trust
it more. So in this competition, a number of different companies brought their AIs.
They competed against each other.
Now, the winner was a previously unheard-of company called Heron Systems, which beat out Lockheed Martin in the finals.
And then their AI went head-to-head against an experienced human Air Force pilot and totally crushed the human.
15 to 0; the human didn't get a single shot off against the AI.
And the thing that was most interesting to me was the AI was able to make these superhuman
precision shots when the aircraft are racing at each other hundreds of miles an hour, head
to head, that are basically impossible for humans to make.
So the AI actually was not just better than the human, but was fighting differently than the human. Yeah. And as you point out in the book,
typically we've all seen dogfights in movies over and over again, even in Star Wars,
the whole thing is to maneuver around and get behind the guy and take the shot from behind,
but it operated differently. What did the AI do? So for humans, exactly. They want to maneuver
behind, get into the six o'clock position behind the enemy and then get a shot off.
But there are these split second opportunities when aircraft are circling and they're nose to nose.
And there's just a fraction of a second where you could get a shot off when they're racing at each other head to head.
And the AI system was able to do this.
It's a shot that's basically impossible for humans to make. It's actually banned in training because it's risky for humans to even try because they risk a collision when the aircraft are racing
at each other head to head. But the AI was able to make that shot, avoid a collision.
And the really wild thing is AI learned to do that all on its own. It wasn't programmed to do that.
Oh, really?
It simply learned to do that by flying in a simulator.
Wow. So it's basically playing chicken with the other
plane and then taking a kill shot and getting out of the way without colliding. That's pretty
amazing. Pretty amazing. Now, of course, you point out in the book that it has complete situational
awareness, okay, which is something that helps it. But later in the book, you talk about poker,
and I thought that was very interesting because for all the years, I haven't been following
all the different game stuff that's been happening. You know, we had all these competitions
where you had computers against chess players and against Go players and all the rest of this
stuff. But I remember at the time, in the early days when I was looking at that stuff, they were saying,
well, the real thing would be poker, because in poker you don't know the entire world
situation. You don't have complete surveillance of everything that's there. And now, as of 2017, you
talked about what happened with poker. Tell people where AI is with poker and how it got to that
situation. Exactly. So poker is a really exciting challenge for AI. It's difficult because it's
what's called an imperfect information game there is this hidden information that's critical to the game. So in chess, in Go, the AI can see the entire board. You can see all of the pieces and where they are. But for poker, the most important information, your opponent's cards, is hidden from you. And so human players have to make estimations. What do I think this other player has based on
their betting and based on the cards that have come out so
far? And it's a really hard
problem for AI.
It is yet another game that has
fallen to AIs. And
I talk in the book about Libratus,
the first AI that was able to achieve
superhuman performance in head-to-head
Texas Hold'em, and then Pluribus,
which actually could do this against multiple players,
which is way harder from a computational standpoint,
because now there's way more factors.
And the really wild thing to me about this was that
when you think about what it would take to achieve superhuman performance in poker,
you think you would need something like a theory of mind,
understanding, okay, this other player, what are they thinking about?
Are they bluffing?
Turns out, actually, you don't need any of that.
You just need to be really, really good at probabilities.
And the AI is able to do that and to beat the best players in the world.
Wow.
I'd like to see it do a game of Blackjack 21.
It'd definitely be banned at the casino. That'd be an easy one for it to do.
But yeah, that is interesting.
And you tied that into your experience in Iraq, I guess it was, maybe it was Afghanistan, but I imagine Iraq, with
IEDs, and how people would try to guess which path would be least likely to hit an IED. Talk a little
bit about that, and how this ability to scope stuff out with probabilities in poker applies to
a real-world situation like that. Yeah. So I tell the story in the book about sort of, you
know, how might these tools that are valuable in poker be used for warfare in a variety of ways?
And in fact, the company, or the researchers rather, that built Libratus, the system that
achieves superhuman performance in poker, they now have a defense startup, and they're doing work with the Defense
Department, trying to take this technology and apply it to military applications. So I talk about
some of the things that I saw in Iraq during the war there, where you're worried about IEDs,
roadside bombs being on the side of the road. And I would have discussions with other soldiers about, okay,
what's the strategy here, right? Do you swerve from side to side to keep them guessing
where you're going to be? Do you drive down the middle? If you see a pothole, do you drive around
the pothole, right, to avoid it, because there might be an IED hidden in the pothole? Or, you know, they
know you're going to drive around the pothole, and then if you go around it, there might be a bomb on the side of the road, and you should drive through it.
And there's not, like, a good answer to these things that soldiers talk about when they're in the war and trying to figure out what to do.
But one of the things that's really compelling about this technology is it might give militaries the ability to be more strategic, and instead of applying sort of, like, you know, just guesswork, which is basically what we were doing, to then apply a little more of a
rigorous, strategic approach to keep the enemy constantly guessing. It's interesting, you know,
in your book, you point out how the AI in some of these war games was super aggressive,
always on the attack, never tired, never exhausted. My son said in Terminator, the Terminator would block blows from humans.
And actually, it wouldn't do this.
A blow is not a threat to it.
It would take the blow and immediately kill the person.
You know, that's... but it is very different in the way that it fights.
And people are saying this is going to change everything as it gets onto the battlefield,
isn't it?
Well, that's what's amazing is, you know, I talked about how this AI dogfighting agent
fights differently than human pilots and uses different tactics. That's true across all of
these games. So the AI system that plays poker, it actually uses different betting strategies
than human poker players. That's also true in chess, in Go, in real-time computer
strategy games like StarCraft 2 and Dota 2. We have these simulated battlefields with different
units. And there are some commonalities actually across how the AI systems are different than
humans across all of these games. And so one of them is that in some of these computer games,
where these AI agents are fighting against the human units, the human players talk about the AIs exhibiting superhuman levels of aggressiveness,
that they constantly feel pressured all the time in the game because there'll be these little
skirmishes among these units. And then for humans, the battle's over and they have to turn their
attention elsewhere. And then they look to a different part of the game and they figure out,
okay, what am I going to do over here now? And the AI can look at the whole game at the same time, and it doesn't need to take a break. It doesn't need to turn its attention away. Humans have to sleep. They have to eat. They have to, you know, go reload their ammunition.
They have to focus their attention and say, okay, what are we going to do next?
The AI doesn't have those challenges.
It's not going to get tired.
It's not going to be emotionally stressed.
And so we could see not just the AI is changing the tactics of warfare in the future, but even the psychology.
Wow.
Yeah.
You go back and you look at World War I, the trench warfare,
people waiting long periods of time, and then it would be –
I've heard many people say war is these long periods of boredom
where nothing happens and then sheer terror, that type of thing.
And even going back to the Civil War, I mean, they would even fight seasonally,
right, would take the winter off or something like that. So the
pace of all this stuff has been accelerating, but now with AI involved, it really puts the pedal to
the metal. And I want to talk about the four different battlegrounds here and a little bit
about deep learning. But before we do, you've also talked about the ethics of some of these things,
things like, will it recognize surrender?
It sounds like it's pretty aggressive.
Will it recognize surrender, or will it just keep coming?
And that's one of the ethical issues about this.
What do we do in terms of trying to keep control of this, even on a battlefield,
so that it doesn't get out of control and just keep going?
Does it even recognize that it has won? Right. And this is a central problem in AI,
whether we're talking about a chatbot like ChatGPT or Bing, or a military AI system,
where the consequences could be much more severe. How do we make sure that these systems are going
to do what we want them to do? How do we maintain control over them? Some Chinese scholars have
hypothesized about this idea of a singularity on the battlefield. At some point in time in the
future, where the pace of AI-driven combat exceeds humans' ability to keep up, and militaries have
to effectively turn over the keys to machines just to be effective. And that is a very troubling
prospect, because then how do you control escalation? Yeah, how do you end it,
right, if it's happening at superhuman speed? Yeah. And there's no answer to that right now.
That's the thing, there are no good answers. Yeah, this is hanging over our heads. And this
technology, again, you know, we can't have an AI gap, so everybody's working along these lines.
As I read your book,
it reminded me of Michael Crichton and the reason that he wrote Jurassic Park: to awaken people to how rapidly genetic technology was changing
and the fact that people were not talking about it in terms of how to control this
or the ethics involved in it.
It's just like, can we do this and just run with it?
And it seems like we're getting in that situation with this as well. Let's talk again, before we get into the
four battlegrounds, the whole idea of swarms of hundreds of thousands of drones, as my son said,
nothing good ever comes in a swarm. So this aspect of it, have you ever read the book
Kill Decision by Daniel Suarez? It's from back in 2012.
It's kind of the theme of that, where they had come up with swarms.
Are you familiar with that?
That's my take.
Well, yes, that's a great book.
Yeah.
And so where are we in, you know, that kind of scenario where you've got this
massive swarm of, you know, killer drones that are communicating with each
other?
We're not going to get into how they communicate,
but it basically is kind of following an insect model.
Is there a defense against that?
In his book, that essentially made ships obsolete,
made all the conventional weapons obsolete,
and the military-industrial complex had to reset the board
and make all new weapons, and they liked that.
Yeah. Well, I mean, I think we're not there yet, but I do think it's coming. So right now, today, drones are largely remotely controlled. There's a human on the other end, if not directly flying the drone
by a joystick, at least telling the drone where to go, giving it the GPS coordinates, and then the
drone goes there. And generally speaking, there's like one person to one drone,
but that's limited because that means that for every drone you put on the
battlefield, you need a person behind it. And people are expensive.
People are limited.
And so this idea of swarming is that now you can have one person controlling
many drones, tens, hundreds, thousands of drones all at the same time.
And the human obviously is not telling each drone where to go.
They're just telling the swarm what to do.
So telling the swarm, go conduct reconnaissance
or look over this area, find the enemy and attack them.
Or it could be for logistics, right?
Resupply our troops, give the troops the ammunition
and supplies that they need.
And the swarm figures all that out on its own
by these individual drones,
or there could be robotic
units on the ground or undersea, autonomously coordinating with one another. It is likely to
be a major paradigm shift in warfare, a huge shift in what militaries call command and control,
the way that militaries organize themselves. So we're not there yet. Most of the systems today,
pretty remotely controlled, little bits of autonomy, but that's likely the path that this is taking us, and it's going to transform warfare in very significant ways. I think the first one they had was autonomous cars. But they've had some; one of them was an intelligent UAV swarm challenge.
Tell us a little bit about that and how that turned out.
So we're seeing the U.S. military and the Chinese military do swarm demonstrations, where they'll take swarms out to the desert somewhere, drop swarming drones off of an airplane, and have them coordinate together.
China is doing the same. So they're taking a page from what the U.S. is doing.
They're often following up with experiments of their own.
And the really difficult thing for the U.S. military is this technology is so widely available.
So, for example, we're already seeing drones used in Ukraine, commercially available drones.
There are some military ones coming from Iran and Turkey, but also commercially available
drones like you could buy online for a few hundred dollars.
And civilians are using them.
They're using them to assist the Ukrainian military.
And in some cases, we've even seen artificial intelligence integrated into these drones.
So AI-based image classifiers that can identify tanks, for example, and find them using AI.
And so just the widespread nature of AI and autonomy is a real challenge for militaries.
Think about how do you control this technology?
Huge problem for the U.S. military, because all of the U.S.'s advantages are negated when anyone else has access to this. Wow, yeah.
That's... and it's kind of interesting that they're being used for, you know, mainly reconnaissance,
like we saw. You know, that was one of the key things that early planes were used for in
World War I, mainly reconnaissance. Before that, they had, you know, reconnaissance balloons
back in the Civil War and that type of thing.
Then eventually they start dropping small munitions and then it's on.
And so it's going to escalate much faster with that.
One of the things that you've talked about is, again, in terms of the AI running away from us, you talk about a flash crash of stocks.
Talk about what that would look like with a flash war.
You know, we've got circuit breakers for the stock market.
You know, what do we do for that?
Again, you know, what is the problem?
Define the problem.
Right.
So, you know, the essence of the problem is how do you control operations going on at
machine speed and in a competitive environment?
So we envision what this
might look like in warfare. So our machines are operating at machine speed faster than humans can
keep up. Their machines are doing the same. They're interacting. We're not going to share
our algorithms with adversaries. They're not going to share their algorithms with us. There's this
potential for these unexpected interactions. Things to spiral out of control. Well, we've seen this.
Actually, we've seen this in stock trading, where there are algorithms executing trades in milliseconds far faster
than humans can respond. And we've had accidents like these flash crashes, where the algorithms
interact in some unexpected way with market conditions, leading to these rapid movements in the
price. And the way that regulators have dealt with this in the financial system is they put
in these circuit breakers you talked about.
They take a stock offline if
the price moves too quickly in a very short period of time.
But in war, there's no referee to call timeout.
So who's the regulator?
There's nobody.
And so if you're going to have some kind of human circuit breaker, that's something that militaries have to do on their own.
Or they have to work with competitors to agree to do that, which is, needless to say, that's
really hard to do.
Yeah, not too likely to happen.
That is a very concerning circumstance.
Again, as you point out, it's a great analogy in the stock market.
We've already seen how that works, but there is no referee in a war.
Talk a little bit about the non-belligerent use of artificial intelligence other than as killing machines.
So AI is a widespread, multi-use technology.
We're seeing AI integrated into every aspect of society: in medicine, in finance, in transportation.
One of the really troubling applications that I talk about in the book is the use of AI for domestic surveillance.
And we've seen this really extreme implementation of this inside China,
where half of the world's 1 billion surveillance cameras are in China.
Yes.
And the Chinese Communist Party is building up this really dystopian model of this tech-enabled
authoritarianism. Because if you've got half a billion cameras, how are you going to monitor that?
We'll use AI.
And they're using AI for facial recognition,
gait recognition, voice recognition,
tracking people's movements. In some cases, for really trivial infractions,
facial recognition being used to go after people for jaywalking,
using too much toilet paper in public restrooms,
but also of course to go after political dissidents
and to clamp down on control that the Chinese Communist Party has
and to repress its citizens and minorities.
Hang on right there.
I want to show people this little clip.
I know you can't see it there.
This is actually a restroom in China.
And in order to get toilet paper,
the guy has to go up to a screen,
and it gets a facial scan of him.
And then it spits out just a little bit of toilet paper.
But that's the state of where this is.
I mean, this is kind of where it hits the fan, isn't it?
I mean, it's even for that.
And perhaps they're going to grab his DNA.
Who knows?
This is the toilet paper.
You talked about going to China, and I don't know
what year you went to China. It was a very different situation from when my family went, about 2000... what
was it, 2005, 2006. Now, you talk about what it's like coming into the country. What do they do
when you come into the country now? Tell people. Sure. So I did several trips to China just before, actually, COVID
hit. I was able to get in there before all the restrictions came down and got to see firsthand
how a lot of AI technology is being employed by the Chinese Communist Party to surveil its citizens.
So one of the first things that happens is you get your face scanned when you come through
into the country and it gets recorded in their database.
Now, I'll point out that also happens at many border checkpoints here in the U.S.
Yeah, it's rolling out in the TSA now, yeah.
That's right.
So when I came back through Dulles Airport in Washington, D.C.,
I also got my face scanned.
Now, what are some of the differences, right?
So it's the same technology, being used in the same application,
that is, to check that people are who they say they are, but under very different kinds of
political structures and governance regimes. So here in the U.S., there are laws that govern how
the government can do that. They're set by the elected representatives, by the people.
There's also a lot more transparency here in the U.S. So when I walk through a border checkpoint in the U.S., there are signs that say,
we're going to collect your facial record, your face, and we're storing it in a database. It
tells you for how long that information is going to be stored, gives you a link you can go online
to get more information on the website. And in fact, the first place I learned about this wasn't
going through a checkpoint in the U.S. It was reading about it in the Washington Post.
So the fact that we have independent media in the U.S. is also, you know, a way to have more
checks and balances on government power and authority, none of which exists in China.
And that to me just really highlights, it's not about the technology.
It's about how we use it.
And are we going to use it to protect human freedom or the Chinese model to crush human freedom?
Yeah, it's hard power versus soft power.
Soft power is going to be coming from our dedication to the rule of law, to individual liberty, to those types of things.
And the problem is that it's getting to the point now where if they want to collect your facial information in order to fly, they may tell you all about it. But if you don't want
to have your facial scan done, maybe you won't fly, and that'll be your choice. You don't get to
fly, but we'll tell you we're going to do this. And so it's that kind of level of
coercion that kind of has, you know, the pretense of choice with it. I'm very
concerned that we're just a
couple of half steps behind the Chinese, and that most people in this country, as well as elected
representatives... most people are sleepwalking through it. Most elected representatives don't
really have it on their radar. But talk a little bit
about what is happening in the area that they are so focused on, the Uyghur area, and as they are looking at that particular population, how they weaponized it there.
So China in particular, the most sort of extreme version of this techno-dystopian model that China's building is in Xinjiang, where China has been very active in repressing the Uyghurs there as part of
a mass campaign of repression against them, including imprisonment, home confinement.
And then throughout the area in the major cities, a series of police checkpoints
that dot the cities every few hundred meters, that check people via facial recognition, gait recognition, that scan their phones, that use biometric databases,
all to track the movements of these citizens
and where they're going.
So, for example, if someone drives through an area,
a camera checks the license plate on the car
and then links that to other data, like the person's face
or the geolocation data from their phone,
and says, okay, is this the person who owns the car?
And if not, bam, you get flagged, and the government's going to come take a look at you.
And, you know, it's all part of this model
the Chinese Communist Party has built to control every aspect of its citizens' movements, because
if you can control how much toilet paper people are using, then you're not going to have people
rising up against the government.
That's right. Yeah. And of course, as I've said, we look at central bank digital currency,
that gets us there really fast. But these other aspects, constant surveillance,
geospatial intelligence, even being used to anticipate where people are going to go,
anticipatory intelligence, talk a little bit about that, what people typically think of as
pre-crime from a minority report. Talk about how they are pulling
all this data together, data mining it, and making decisions about what you're going to do in the
future and who their suspects are going to be. That's right. So one of the things that they
built is a platform for looking at people's behavior, tracking it. China's put together a social
credit system, scoring people based on activities that they're doing, including sometimes trivial
infractions like not sorting the recycling. That might get you docked points to try to shape
people's behavior. And then also trying to anticipate where they might find something
that looks suspicious. So if someone books a hotel room on their credit card
in the same city that they live in,
that gets flagged by the police
and the new police cloud database
that many police departments in major cities
and provinces are building in China.
Well, they'll say, okay, well, that's suspicious.
What are you doing?
We're going to look at you, looking at geolocation data.
So if they see a person has been in an internet cafe
at the same time as another person
multiple times during the week, they're linking these people and saying, okay, what's going on
between them, trying to ferret out any kind of behavior that the party might see as a threat to
it. Yeah. And that's the thing that's very concerning. And of course, the reason you're
talking about this is because it's artificial intelligence that allows them to be able to
make these correlations and to sort through just a staggering amount of information.
If we go back and we look at the Stasi, they were keeping track of everybody.
And you point out that they put in some Han Chinese in the Uyghur area to be informants.
But that's nothing compared to all the biometric surveillance and the artificial intelligence and how they can put that stuff together.
You know, they had so much information.
Everybody was spying.
More than half the people were spies and informants on the remaining less than half of the people.
And yet they didn't have a way to put that stuff together.
That's the kind of leverage that this technology now gives to dictators, right?
That's what's chilling about it.
It allows this surveillance at a scale that's
not possible with humans. And it's not just that AI can be used for repression.
Lots of technologies can be used for oppression. A police baton can be used for oppression.
It's the fact that AI can enhance the system of oppression itself and further entrench it
so that it's even harder for citizens to rise up against the government. So it's not that the Chinese Communist Party is just using this to crack down and find
the dissidents if there's another Tiananmen Square protest in the future.
I walked through Tiananmen Square, surveillance cameras everywhere, as you might expect.
I estimated about 200 cameras across the square, on every pole, watching every single movement.
The goal, really, for the party is making sure that the dissidents
never even make it to the square.
Yeah, yeah.
Yeah, I imagine if you did something there in Tiananmen Square
that indicated that you were concerned about that,
that would really put you on their list for sure.
Talk a little bit about Sharp Eyes.
This is something that came out about 2015.
I remember when this program came out.
Talk about the Sharp Eyes initiative in China.
So China's been steadily building components of this digital infrastructure to control its population.
So one of the first components of this was the Great Firewall, firewalling off information inside China.
There's a propaganda component of this. But increasingly, with programs
like Skynet and Sharp Eyes, China has been creating the physical infrastructure as well.
So not just controlling information, but now controlling physical space. So Sharp Eyes is a
massive government program to build out surveillance cameras in every aspect of China so that every
single place is covered. Bus stations, train stations, airplanes, hotels, banks, grocery stores, every kind of public
area is surveilled so that any place someone goes inside China, there's a camera watching
them and tracking their movements.
And you mentioned Skynet.
You mentioned in the book that they didn't name it after the Terminator; it's kind of a transliteration of what they've got. But it's essentially going to be the same thing, I guess, once they hook it up with some military equipment. Let's talk about the four battlegrounds, because that's what your book lays out. And your book is set up primarily for people who are in the military, I think, to look at where we are relative to China in terms of,
because you don't really talk that much about Russia. You do have a quote at the beginning
from both Xi Jinping and from Putin about the importance of artificial intelligence,
but the real threat seems to be coming from China in this. And so you look at this from a power
standpoint, and you talk about four
different areas. Talk about the first one, data. Sure. So how can the U.S. stay ahead of China in
this really critical technology? Well, data is essential. Data is essentially the fuel for
machine learning systems. Machine learning systems are trained on data. Now, it's often said, or people might have
this impression that China has an advantage in data because they have half a billion surveillance
cameras. They're collecting data on their citizens. When I dove into this, my conclusion
ultimately was that that's not true, that China doesn't have an advantage in data for a couple
reasons. One is that what matters more than the population size of a country
is the user base of these tech companies. So China's got a bigger population than the U.S.
or Europe. There's more people. They're going to collect more data on their citizens.
But U.S. tech companies aren't confined to the United States. So platforms like Facebook and
YouTube have over 2 billion global users each. Whereas in fact, China's
WeChat has only 1.2 billion users. And other than TikTok, Chinese companies have really struggled
to make it outside of China and break into the global marketplace. So that's an area where the
population turns out to be not really an advantage for China. In fact, the US probably has advantages
in global reach of these companies. Another reason why people think that China might
have an advantage is because the Chinese government's doing all the surveillance. Well,
it turns out that the Chinese government doesn't let Chinese companies necessarily do that same
level of surveillance. So the Chinese Communist Party is actually pretty restrictive about who
gets its spying powers. They don't want Chinese companies to have the same spying powers that
they do.
And they've been passing consumer data privacy laws.
So even though there's no regulations inside China on what the government can do, they actually are passing regulations on what Chinese companies can do to Chinese consumers.
So those same spying powers don't necessarily exist on the corporate side. Whereas, of course, in the U.S., consumers have actually acquiesced a fair amount to this model of corporate surveillance, of U.S. tech companies hoovering up lots of their personal data without a lot of pushback. There's grumbling, but there are no federal data privacy regulations.
And so a lot of these things… We've said for the longest time, if it's free, you are the data.
You're the product, right?
Your data is the product.
And that really underscores how much better they're able to get that information from people just by providing a free product.
And we give them all the information about ourselves.
That's right.
So we actually are giving up a ton of information voluntarily, at least to companies, if not to the government.
And so I'm not sure
that China actually has an advantage here. I think both countries are going to have
access to ample data. The more important thing is going to be building pipelines within companies
or their militaries to take this data, to harness it, to clean it up, to turn it to useful AI
applications. Yeah. Talk a little bit about how that is used by AI,
why data is so important.
As you mentioned,
people said data is the new oil or whatever because of machine learning.
Tell people why the,
why there's so much concern and emphasis on the quantity of data that they've
been able to collect about us.
How's that used?
Yeah.
So as I'm sure people are aware,
and it's why we're having this conversation,
part of it is there's been this huge explosion
in artificial intelligence in the last decade.
And we've seen tremendous progress
through what's called the deep learning revolution.
So not all of AI, we talked about poker,
it doesn't use machine learning,
but a lot of the progress right now
is using machine learning
and a type of machine learning called deep learning that uses deep neural networks, which are a connectionist paradigm
that are loosely modeled on human brains. And in machine learning, rather than have a set of rules
that are written down by human experts about what the AI should do, and that's how, for example,
like a commercial airplane autopilot functions, a set of rules for what the airplane should do in any given circumstance.
Machine learning doesn't work that way.
And instead, the algorithm is trained on data.
And so people can take data of some kind of behavior and then train this AI system.
For example, on faces.
If you have enough pictures of people's faces and then they're labeled with those people's names, you can feed that into a neural network and it can learn to identify who people are based on really subtle patterns in the faces.
The same way that we do.
Really subconsciously, not even thinking about it.
We can identify faces.
And the thing is you need massive amounts of data.
So AI systems that do image classification, for example, that identify objects based on images
use databases with millions of images. Text models like ChatGPT or Bing use hundreds of gigabytes of
text. In fact, a good portion of the text on the internet. And so having large amounts of data and
having it ready to train these systems is really foundational to using AI effectively.
Yeah. One of the examples that you have
is being able to distinguish between an apple and a tomato.
Talk a little bit about that.
So if you think about a rule-based system,
the old model of AI,
how would you build a rule-based system
to tell the difference between an apple and a tomato?
So they're both, right, they're both round,
they're red, sometimes green, they're shiny.
Maybe they have a green stem on top.
Like if you're trying to tell the difference to someone who'd never seen one before, that's actually kind of tricky to do.
Yeah.
But they look different.
And in fact, a toddler can tell the difference between them if they've seen both of them.
And it turns out that, you know, building a rule-based system for AI to tell the difference is really hard. But if you feed enough labeled images of apples and tomatoes to a machine learning system, it can just learn to tell the difference.
The same way that humans do based on all of these subtle cues about the texture and the shape and how they're different.
And so that's a great example of these kinds of problems that AI is really powerful for using machine learning.
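The contrast being described here can be caricatured in a few lines of code. This is a toy sketch, not anything from the book: the feature names and numbers are invented, and the "learning" is just a nearest-centroid comparison, but it shows why summarizing labeled examples beats hand-writing rules.

```python
# Toy illustration (invented features/numbers): telling apples from tomatoes.
# Each fruit is a made-up feature vector: (roundness, glossiness, skin_texture).
labeled_examples = {
    "apple":  [(0.8, 0.6, 0.9), (0.7, 0.5, 0.8), (0.9, 0.7, 0.85)],
    "tomato": [(0.9, 0.9, 0.3), (0.85, 0.8, 0.2), (0.95, 0.85, 0.25)],
}

def centroid(points):
    """Average each feature over the labeled examples -- the 'training' step."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

centroids = {label: centroid(pts) for label, pts in labeled_examples.items()}

def classify(fruit):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(fruit, centroids[label]))

print(classify((0.88, 0.85, 0.22)))  # glossy, smooth-skinned -> "tomato"
```

Nobody had to write a rule about stems or shine; with more labeled examples (real systems use millions of images, not six tuples), the boundary between the two classes just falls out of the data.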
Yeah. You know, when we look at generative AI, the AI that people are using so much for artwork
and that type of thing, and you compare it to the chat programs that we've seen and the real
colorful episodes that people had as they were working with it, you know, it's the same type
of thing, essentially. They're able to create this interesting
artwork because they've got so many different images that they have seen and just pull these
elements together. But that's exactly what they're doing with the chat when it goes off the deep end as well. They've had all of this massive amount of conversation and, you know, scripts or whatever, novels, and they're able to pull that kind of stuff together
just like they pull together the interesting elements of artwork
to make something that's different.
Isn't that a good analogy, or what do you say?
Oh, absolutely.
They're doing essentially the exact same thing,
just one with images and one with text,
where you've seen this explosion in generative AI like ChatGPT,
like these AI art generators.
They're really, really powerful.
And they're not actually sort of copying
and pasting from the database.
What they do is they have a model
that's trained on these massive databases
of images or text.
And then what happens is they build a statistical model of associations of text, or associations of pixels and what an image looks like.
And then with a prompt, if you're talking to, say, ChatGPT or to Bing, you start having a conversation, you give it a prompt, and then it's going to spit back a response.
And almost all of the really weird stuff that these language models are doing, when you think about it, it's modeling something that exists on the internet.
So these models, they can get argumentative, they're arguing with users, they're trying to
deceive them. One case, the model is telling this user that it's in love with him and he should
leave his wife. Well, all of it seems like really loony behavior, but there's all that stuff on the
internet. There's all sorts of weird, wacky things on the internet.
So it's learned based on this text on the internet, those kinds of behaviors.
And then it's no surprise that it spits them back at us when we prompt it to do so.
Yeah.
Even coming up with a kind of HAL scenario, like from 2001, you know: I was watching these people on the cameras, they didn't know I was watching them on the cameras, that type of thing.
Yeah.
It strikes me, as we're talking about the importance of data, that I don't really understand how these machine learning models work. I mean, I've come at this from a procedural standpoint, you know, from engineering and programming. So I don't really understand how these things can assimilate this and build these models from looking at, you know, a lot of pictures of tomatoes and apples and everything. But they do it somehow, and the key thing with all this appears to be the data.
And so I was wondering, because I've been wondering why there's so much fear and concern about TikTok with various people. And I know part of it is that it's going to be easier to scrape this data off: if they own the platform, they can get the data more easily than they could if they were just trying to scrape it off publicly, because everything on Facebook and all the social media is out there publicly. So the concern about this, I imagine, besides getting information about interesting individuals, might be the larger access to, you know, having that big platform of data. Because you're talking about data as kind of a strategic resource for nations, the fact that you can get this stuff from Facebook or other things to feed into your artificial intelligence. Is that part of it, you think, with TikTok?
Absolutely. Data is part of it.
And then the algorithm behind TikTok is another big part of it.
So TikTok looks really innocuous.
I do think it's a major threat to U.S. national security,
not because the platform itself is a problem,
but because the ownership is a problem.
Because the company is owned by a Chinese company,
it's ultimately beholden to the
Chinese Communist Party. And so one of the problems is that the app could be used to take people's
personal data. So it's on your phone. Your phone will sometimes ask for permission. Oh, this app
can access other information about you, your location, can access other apps. And you know,
I'll be honest, like myself, maybe a lot of people just, okay, allow, sure, right? But then all of a sudden that app's grabbing all sorts of information,
maybe your contact list, maybe it's grabbing your geolocation, maybe it's seeing what you're doing
with other apps, and it's sending it back. And in the case of TikTok, if the Chinese Communist
Party says, we need access to that data, the company has no choice. If they say no, they go to jail.
So when the FBI told Apple, you need to unlock this phone, Apple fought the FBI. They fought
them in court, and they fought them in the court of public opinion. And neither of those things
exist inside China. A Chinese company can't fight against the government in that same way.
They don't have any kind of freedom from the government. And so that's the main problem, but it's also the algorithm behind this information. Because with TikTok, and that's true for all the social media platforms, true for Facebook and Twitter and YouTube.
YouTube. Right. Yeah. Facebook does it to us. We just, you know, occasionally they will push back
against the government, but for the most part, they're going to do what the government wants
to do. And they're grabbing all that stuff on us as well, right?
Right.
So for all these platforms,
they're feeding you information based on this algorithm saying,
okay, we think you should look at this information.
And companies are all very opaque about this.
They're not very transparent about what's in the algorithm.
There's been a lot of controversy about many of the U.S. platforms
that maybe they're pushing people towards more extremist content.
The problem with TikTok in particular is that this algorithm could be a vehicle for censoring
information. And in fact, it has been. And in fact, there's been leaks coming out of TikTok
that shows their internal censorship guidelines. That's been leaked. We've seen it. We've seen
actually their guidelines. And TikTok has said they would censor political content.
So anything that might be offensive to the Chinese Communist Party, something about the Tiananmen Square massacre, that's censored.
And so that's a real problem when we think about this as an information environment that Americans are using.
This would be like the Chinese Communist Party owning a major cable news network in the United States.
That's a real threat to U.S. national security, and we have to find ways to address it.
Sure.
Yeah.
It's kind of like what we saw with the Twitter files.
You know, we saw how at the beck and call of officials and government that they would
censor or they would give them information on people.
And of course, we see the same thing when we look at 5G.
You know, they're concerned about Huawei because the Chinese government's going to use it to surveil us.
But again, our government is going to use the other 5G that's made by our companies to surveil us as well.
Talk a little bit, while we're on data, about the issue of synthetic data, because I thought it was interesting. As I mentioned earlier, the first competition that DARPA had was the self-driving cars.
And in your book, you talk about the fact that Waymo, the number of miles that they've driven and then how they've synthesized this data.
Talk a little bit about that.
Sure.
So synthetic data is AI-generated data.
That could be AI-generated text like some out of ChatGPT. It could be AI-generated artwork. But it's also a tool that companies can use in building more robust AI systems. So self-driving car companies, for example, are collecting data driving on the roads. They have the cars that are driving around with all the sensors and all the cameras, and they're scooping up data as they're driving around, but they're also using
synthetic data in simulations. So Waymo's talked about they're collecting data on roads, but they're
also running simulations. I think they've done 10 million miles on roads, collecting up data.
And I think it's 10 million miles a day they've said that they're doing in simulation. So they're
able to supplement with many orders of magnitude more because they can run these simulations at accelerated speed.
And so now if there's a situation, they see where there's a car, there's a new situation on the highway they've never seen before.
Car cuts them off, does something weird.
They capture that data, they put it in a simulation.
Now they can rerun it different times of day, different lighting conditions, different weather conditions.
And then all of that can make the car more robust and more safe.
So it can be a really valuable tool as a supplement to real-world data, or in some cases, just as a complete replacement.
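The replay idea just described can be sketched in a few lines. The scenario fields and condition lists below are invented placeholders, not Waymo's actual formats; the point is only how one recorded edge case fans out into many synthetic variants.

```python
import itertools

# One recorded edge case: a car cutting in on the highway.
# (Field names and values are made up for illustration.)
recorded_scenario = {"event": "car_cuts_in", "gap_m": 4.2, "speed_mph": 65}

# Conditions the real drive never covered, swept in simulation.
times    = ["dawn", "noon", "dusk", "night"]
weather  = ["clear", "rain", "fog", "snow"]
lighting = ["sun_glare", "normal", "headlights_only"]

synthetic_runs = [
    {**recorded_scenario, "time": t, "weather": w, "lighting": l}
    for t, w, l in itertools.product(times, weather, lighting)
]

# One recorded event becomes 4 * 4 * 3 = 48 simulated variants of itself.
print(len(synthetic_runs))  # 48
```

That multiplicative fan-out is why simulated mileage can run orders of magnitude ahead of real-world mileage.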
And this is what the Alpha Dogfight did.
That AI agent was trained on 30 years of time in a simulation. So, synthetic data in a simulation teaching it how to perform a task.
That's interesting. And, you know, when we look at it,
you, as you point out, 10 million simulated driving miles every single day, and 10 billion simulated miles as
of 2020. And yet, you know, we look at this and some skeptics of AI are talking about the fact that
we've gone through a couple of different waves of AI where everybody was excited about it and then
things didn't pan out and it dropped off. And we're now at like the third time of that. We've just had Waymo lay off 8% of their labor force, and they're having a problem with that. It was in San Francisco. I think it was Cruise, maybe, not Waymo, where their vehicles all went to one intersection and blocked it, you know? So, you know, there are certain hangups like this that are happening. But even in San Francisco, where Waymo is headquartered, they were all very upset about the fact that the cars are moving slow.
They were having difficulty.
You know, if you've got a situation at a four-way stop or something,
they have difficulty negotiating with the humans as to who's going to go next.
And so they just sit there.
Talk about that.
Is that showing a real Achilles heel for artificial intelligence,
what we're seeing in a self-driving car?
Oh, absolutely.
I mean, we're talking about all the amazing things that AI can do,
but it's worth keeping in mind that a lot of the things we're talking about are really narrow,
like playing Go or poker or even generating art images.
And humans have the ability to perform all of these different tasks, right?
So humans can write an essay.
They can make a painting, maybe not a great one, but they can do it. They can, you know, use a camera to take a picture. They can get in a car and drive. But AI systems are narrow, and when they run into something outside their training data, they might do something super weird. And that's a big problem for self-driving cars, because you need a self-driving car that's good, not just some of the time, not just 80% of the time or 90%, but one that's good all the time, that's safer than humans. I think we'll get there
eventually, but we're seeing the self-driving cars, how hard that is out in the real world
in an unconstrained
environment. And the human brain, for now, remains the most advanced cognitive processing system on
the planet. And so when we think about using AI, there are going to be some tasks where we might
be able to use AI instead of people. But people are still going to need to be involved in all
sorts of aspects of our society because humans have the ability to take a step back, look at the bigger picture, understand the context, apply judgment in a way that even the
best AI systems can't do. Yeah. And you know, when you look at it in terms of the self-driving car,
you know, you got the different levels of driving ability. Five is fully autonomous. Four is like,
we're doing most of it for you, but if it's an emergency, we're going to kick control back to you.
And of course, that's a really dangerous one because typically at that point in time, the person is fast asleep or playing a video game or whatever.
And it's like, you know, here, take this, take the wheel right now.
And so, you know, when we see that, I would imagine that's really the big issue.
You know, we started talking about the dogfight.
I imagine that's the really big issue with the pilots. You know, it's like, oh, okay, now we're in a tight spot here.
It's up to you now. I can't handle it. I'm going to kick it back to the pilot. I mean,
I'm sure that's the issue with them as well, right? That's a huge problem. It's a huge problem
because right now, you know, if you had this AI, can you do some things, but not everything?
How do you balance what the AI does and what the human does?
And what we often do, which is a terrible approach, like you're saying, is we can have the AI do as much as it can, and then we expect the human to fill in the gaps.
And that leads to situations that are just not realistic for humans.
So the idea that someone's going to be sitting in this car, going on the highway at 70 miles an hour, not paying attention because
the AI is driving. And then in a split second, the human's going to realize, uh-oh, something's
wrong, I need to take control: see what's happening, grab the steering wheel, steer the car.
It's not realistic. Humans can't do that. And so we need a model for humans and machines working
together that also works for human psychology. And in fact, one of the things that this DARPA program is doing with putting an AI in the
cockpit is looking at things like pilot trust.
And in fact, what they're doing now is taking these AI systems that were on simulators, and
they're putting them in real world F-16 aircraft.
They're flying them up in the sky.
The AI is doing maneuvering of a real airplane.
And that itself is challenging.
Moving from a simulator to the real world, because the real world's a lot more complicated than a simulator. But they're also looking at what the pilot is doing. So they've instrumented the whole cockpit, and they're tracking things like what the pilot is looking at. Is the pilot looking at the map and thinking about the higher-level mission, which is what we want the pilot doing? Is the pilot looking at the controls, trying to figure out what the AI is doing, or looking out the window because the pilot doesn't trust the AI?
And getting to that level of trust,
getting to that seamless coordination
between humans and AI
is going to be really important
to using AI effectively.
Well, let's talk about the other three battlegrounds.
We talked about data.
The next one is compute.
Tell people what that represents.
So compute means
computing hardware or chips
that machine learning systems
run on. So
machine learning systems are trained on data.
They're trained using computing hardware
or computing chips, sometimes massive
amounts of computing infrastructure.
And for a large language model like ChatGPT, it's trained on hundreds of gigabytes of text,
often trained on thousands of specialized AI chips, like graphics processing units or
GPUs, running for weeks at a time, churning through all this data, training them up.
If data is a relatively level playing field between the US and China, in hardware and computing power, sometimes called compute, the US has a tremendous advantage. Because while the global semiconductor supply chains are very globalized, they flow through a number of countries. And in fact, the most advanced chips are not made in the
US. 0% of the most advanced chips in the world are made here in the United States. They depend on US technology.
And they're made using technology, tooling, and software from US companies.
And it gives the US control over key choke points in the semiconductor supply chain.
And the US has used this to deny China access to semiconductor technology when it was
strategically advantageous to the United States.
The U.S. did this to Huawei. When it turned off Huawei's access to the most advanced 5G chips,
they weren't made in America, they were made in Taiwan, but they were made using U.S. equipment.
And so the U.S. said to Taiwan, using export control regulations: you're not allowed to export any chips of this certain type to Huawei that are
made using U.S. equipment. And now the U.S. has done this actually across the board. Biden
administration put this out in October, very sweeping export controls to China on semiconductor
technology and the most advanced AI chips. And then on the equipment, this is really critical
for China to make its own chips, holding back China's own domestic production.
Yeah, that's changed quite a bit since I was a young engineer.
We had, you know, the state of the art in terms of geometries, and the company I worked for was unable to do it domestically here. All of their yield was coming out of Japan; they were able to do it. But in terms of commodity products, that had already been ceded 40 years ago to offshore sources, though we had kind of a lock on CPUs and things like that. That now has changed, as you pointed out. And I was surprised
to see that in the book, that pretty much all the sophisticated chips are coming out of Taiwan. You said Taiwan
has 90% of the most advanced chips in the world made in Taiwan. And so that's one of the things
that we're looking at here with China and Taiwan that is extremely important and why I think that's
going to be a source of conflict,
flashpoint, all the rest of the stuff,
why we're seeing this tension build up there
as the Chinese are moving towards Taiwan.
It's because of the advanced chips there
and how it is really kind of at the center
of the state of the art of the semiconductor industry,
whereas we've just kind of got a few choke points
here and there in the semiconductor industry.
They've got the
big foundries as well as the most advanced foundries there, right?
Absolutely. So 90% of the world's most advanced chips are made in Taiwan, as you said.
And that's a real problem when we think about security of supply chains, because Taiwan's an
island 100 miles off the coast of China that the Chinese Communist Party has pledged to absorb, by force if necessary. So protecting Taiwanese independence, protecting Taiwan, is critically important, and finding ways to ensure that China doesn't engage in that military aggression matters for important political and economic and military reasons.
Yeah, yeah.
And that's important to understand as people look at this conflict building up,
the strategic interest that the U.S. perceives in this.
And as you point out,
I thought it was kind of interesting,
you know, looking at Moore's law,
very familiar with that,
that the computing power of chips
would increase at an exponential rate,
doubling every couple of years.
But you pointed out that there's another law that I had not heard of,
Rock's Law, that the cost of a semiconductor fabrication plant doubles every four years,
and that compute usage, because of all this deep learning stuff,
is doubling every six months.
So it's outpacing it.
But the cost of the semiconductor manufacturing facilities
is causing an amazing
concentration because of the capital cost involved in putting up these state-of-the-art
facilities and foundries.
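The mismatch between those doubling rates can be put in numbers. A process that doubles every `period` years grows by a factor of 2^(years/period); the ~6-month figure for deep-learning compute is the one mentioned in the conversation, and the other two are the stated laws.

```python
# Growth after `years` of a process that doubles every `period` years:
#   factor = 2 ** (years / period)
years = 10

chip_density = 2 ** (years / 2.0)   # Moore's law: ~2-year doubling
fab_cost     = 2 ** (years / 4.0)   # Rock's law: fab cost, ~4-year doubling
ai_compute   = 2 ** (years / 0.5)   # deep-learning compute: ~6-month doubling

print(f"chip density: ~{chip_density:,.0f}x")   # ~32x
print(f"fab cost:     ~{fab_cost:,.1f}x")       # ~5.7x
print(f"AI compute:   ~{ai_compute:,.0f}x")     # ~1,048,576x
```

Over a decade, demand for training compute grows about a million-fold while chip density grows 32-fold, which is exactly the outpacing, and the cost concentration, being described.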
That's right.
So the technology that's used in making these most advanced chips is simply unbelievable.
It's some of the most advanced, difficult technologies on the planet.
And as
the costs continue to go up, so a leading edge foundry might cost anywhere from $20 to $40
billion to build that foundry using the most state-of-the-art technology. What we've seen,
of course, as a result of these market pressures and rising costs is the number of companies
operating at the leading nodes of semiconductor fabrication
has continued to shrink. And so we've seen at the most leading edge now, it's now just two
companies really, TSMC and Samsung. On the equipment side, there are some companies that
have a sole monopoly. So for the equipment that's used to make the most advanced chips,
there's one company in the world, a Dutch company, ASML, that makes the equipment needed to make those chips.
And these concentrations of the supply chain give the U.S. and allies unique elements of control over who gets access to this critical resource, the computing hardware that's needed for the most advanced AI capabilities.
And of course, this complicated, complex distribution of the supply chain is something that is very worrying as we move towards the future. The lifestyle that we have and the things that we rely on are just strung out all over the planet. And it is truly amazing to think about how that has happened with globalization. You know, you've got one company in this country, and another one in another country, with a different aspect of it. Talk
about talent. We're just about out of time for talent and institutions, but let's talk a little bit about talent, because China had the Thousand Talents program. And we saw this manifest itself with a Harvard professor during the concerns about bioweapons and other things like that.
Talk a little bit about the U.S. versus China in terms of talent.
Yeah. So the last two battlegrounds are human talent and institutions, the organizations needed
to import AI technology and to use it effectively. And the U.S. has a tremendous advantage over China
in human talent because the best AI scientists and researchers from around the world want to come to the United States, including the best scientists in China.
So over half of the top undergraduates in China studying AI come to the U.S. for their graduate work.
And for those Chinese undergraduates who come to the U.S. for graduate school, who study computer science and do a PhD, 90% of them stay in the U.S. after graduation.
So the best and brightest from China actually come into the U.S. and they're staying here.
And that draw of top American universities and companies as a magnet for global talent is a huge advantage that China cannot compete with.
You got an anecdote about China and their chat program.
Talk about that, the China dream.
Oh, yeah.
So, you know, one of the chatbots in China,
a Microsoft chatbot called XiaoIce,
said on a Chinese social media platform,
someone said, well, what's your Chinese dream?
It's a phrase used by Xi Jinping to talk about
sort of their version of the American dream.
This chatbot says,
my Chinese dream is to go to America.
They did not like that. They probably censored that chatbot.
I think that's why when you look
at soft power, I think that
having a climate of liberty and
freedom and prosperity, if we can
maintain those things,
that really, I think, is upstream,
you know, of our overall system.
And that's really what concerns me when I look at talent, when I look at what is happening
in universities and other things like that, because we're starting to lose that kind of
freedom.
But talk real quickly, before we run out of time, a little bit about institutions.
So institutions are the last key battleground and it's institutions that are able
to take all of these raw inputs of data, computing hardware, and human talent and turn them into
useful applications. So if you think about airplane technology, airplanes were invented here in the
United States. By the time you got to World War II, they gave the US no meaningful advantage in
military air power. All of the great powers had access to aircraft technology.
What mattered more was figuring out what do you do with an airplane? How do you use it effectively?
The U.S. Navy and the Japanese Navy innovated with aircraft carriers, putting aircraft on
carriers, using them in naval battles. Great Britain, on the other hand, had access to aircraft
technology, but they squandered that advantage and they fell behind in carriers, not because
they didn't have the technology, but because of bureaucratic and cultural reasons.
And so finding ways to cut through government red tape, move faster, innovate, be agile
are really essential if the U.S. is going to stay in the lead and maintain an advantage
in artificial intelligence.
It's been fascinating talking to you.
We could go on a long time about this, but again, the book is Four Battlegrounds.
The author, as you've been hearing, is Paul Scharre, also the author of Army of None,
and I don't know what that was.
But thank you so much, Mr. Scharre.
Thank you.
Appreciate you coming in.
Thank you.
Thanks for having me.
Thank you very much.
And thank you, folks, for listening.
That's it for today's broadcast. Has your news been censored, banned, censored, banned over and over again? Has vital information been held prisoner by mainstream and anti-social media?
It's the duty of every thinking person to make the great escape to thedavidknightshow.com.
There you'll find links to live streams,
videos, audio podcasts, and support links.
Live stream the show at DLive and every Monday through Friday, 9 a.m. Eastern.
Videos at Bitchute and Ugetube.
New audio podcast,
The Real David Knight Show
at Podbean, iTunes, Stitcher, iHeart, and more.
But even though there's a light at the end of the tunnel, without your support, the show will run out of gas.
The links to support the show are at TheDavidKnightShow.com to donate via Subscribestar,
donate via P***, or donate via P****, Cash App, Bitcoin, or P.O. Box.
Our sincere thanks to all of you who have stood with us to get this far.
Please don't forget to share the links and pray for the country as well as our family.