WRFH/Radio Free Hillsdale 101.7 FM - Nick Whitaker: A Playbook for AI Policy
Episode Date: July 31, 2024
Artificial intelligence (AI) is shaping up to be one of the most consequential technologies in human history. How can the United States maintain its strategic lead in AI development, advance its national security interests, and ensure AI safety? A new Manhattan Institute report by fellow Nick Whitaker offers answers, presenting a primer on the history of AI development and principles for guiding AI policy. He joins Michaela Estruth on WRFH. From 07/31/24.
Transcript
This is Michaela Estruth on Radio Free Hillsdale 101.7 FM.
With me today is Nick Whitaker.
He is a fellow at the Manhattan Institute,
where he analyzes emerging technology policy
with a focus on artificial intelligence.
He is an editor and founder of Works in Progress,
a magazine of new and underrated ideas in science,
technology, and economics.
At Brown University, Nick Whitaker graduated with honors in philosophy
and a specialization in logic and language.
He recently published a lengthy report for the Manhattan Institute on artificial intelligence, and he's here to discuss it with us. Mr. Whitaker,
thanks for being here. Thank you for having me. I wanted to begin. I think a lot of people have
various ideas of artificial intelligence. Some people immediately think of ChatGPT. Other people think
of robots eventually displacing humans, and I know it can be as simple as autocorrect. So I was
wondering if you could give a general definition of artificial intelligence and what your article
looked at. Yeah, I think the important definition of artificial intelligence
and especially artificial general intelligence
is mechanical systems or software systems
that could replace human cognitive labor.
So if someone said
AGI exists, come look at this product
that's artificial general intelligence.
What I would want to see is that that product
could essentially play the role of a remote worker
in the sense that it could email with you,
it could join your Zoom calls,
it could converse with you over the internet.
I think something like robotics would be, you know, an important feature on top of AI.
But kind of AI itself would be something that can automate the cognitive labor that humans do.
So I think we've heard the term tossed around for a long time.
Growing up, you know, you'd say there was AI in video games that would play against you.
But I think what we're seeing recently is highly general systems like the ones you converse with on ChatGPT that are able to answer a wide range of questions and queries across a wide range of topics.
So they're not just constrained to a single domain like earlier AIs that could play chess or could recognize images.
Yeah, wow.
Okay.
So going into that in your article, you divided it into the history of AI and then its future.
So where would you say the history of AI begins and why is it important to know how it progresses?
The history of AI that I'm especially interested in starts in 2010 with what's called the deep learning revolution.
So in the 2010s, researchers started training models across huge quantities of data,
quantities of data that previously were inaccessible or that we didn't have the computational power
to analyze. And because you could do this, you could actually create algorithms that were able
to sort of learn on their own by looking at lots of pictures, reading lots of texts, to sort of
create both a broad range of knowledge across subjects, but also seemingly sort of mental models
of how the world works from their reading, from their looking at text, from their watching videos.
In the early days, this meant systems like AlexNet, which for the first time achieved practical
computer vision, where you could show the computer a picture and it could tell you, for example,
whether a dog was within that picture.
And this is something that had never been done successfully before.
People had been trying to do that since the 1960s.
And all of a sudden, undergraduate computer science majors were able to make these programs.
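To make the computer-vision example concrete, here is a minimal sketch of the kind of image classification AlexNet made possible, using PyTorch's torchvision and its pretrained AlexNet weights. This is not the original 2012 training code, just a modern way to run the trained model, and the image file name is an assumption for illustration.

```python
# Minimal sketch: classify an image with a pretrained AlexNet (PyTorch / torchvision).
# Assumes a recent torchvision and a local image file "dog.jpg" (illustrative name).
import torch
from PIL import Image
from torchvision import models

# Load AlexNet with weights pretrained on ImageNet.
weights = models.AlexNet_Weights.DEFAULT
model = models.alexnet(weights=weights)
model.eval()

# Preprocess the image the way the network expects (resize, crop, normalize).
preprocess = weights.transforms()
image = Image.open("dog.jpg")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

# Run the network and read off the most likely ImageNet category.
with torch.no_grad():
    logits = model(batch)
probs = logits.softmax(dim=1)
top_prob, top_class = probs.max(dim=1)
label = weights.meta["categories"][top_class.item()]
print(f"Predicted: {label} ({top_prob.item():.1%})")
```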
Now, I think from there, you first saw systems like AlphaGo, where AlphaGo was trained on a huge number of games of Go.
And from sort of watching these games unfold, it learned to play the game first at a high level, and then it began to kind of play against itself and got so good that it was
better than any human, decisively better than any human player.
Now, nobody really expected this to happen, especially in Go, which is an incredibly complex
game. Earlier, chess AIs had sort of memorized the best way to open a chess game and the best
way to end a chess game.
But this was a program that sort of gained what we might, you know, kind of intuitively think
of as an intuition for how to play just by sort of understanding which moves had the
highest probability of winning.
And in the late 2010s, you saw the rise of large language models, which were sort of trained on these vast, vast quantities of text.
And through this training, they were able to sort of predict, you know, the next word of text,
but also kind of evidence an understanding of the underlying ideas behind the text,
the underlying kind of world model that it would take to predict the next token of text.
And in 2019, there were systems like GPT-2 that could do this but, you know, would kind of
quickly get confused. They couldn't even kind of sort of count to 10 without messing up.
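As a concrete picture of what "predicting the next token" means, here is a minimal sketch using the Hugging Face transformers library to ask the original GPT-2 model for its most likely next tokens after a short prompt. The prompt is illustrative, and the library and the public "gpt2" weights are assumed to be available.

```python
# Minimal sketch: next-token prediction with GPT-2 (Hugging Face transformers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a score (logit) to every token in its vocabulary;
# the highest-scoring tokens are its guesses for what comes next.
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]
top = torch.topk(next_token_logits.softmax(dim=-1), k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: p={prob.item():.2f}")
```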
And a lot of things happened, but kind of chiefly, we both kind of scaled up the models.
We used more data and more computational power to train them. And second of all, our algorithms
got more sophisticated such that they could kind of more efficiently process that data and use
the computational power more effectively. And after a few years, you know, a relatively short
amount of time, you went from these systems like GPT-2 that you might kind of intuitively think of
as a toddler that sort of can kind of string together a few words but can't really make sense,
to a system like GPT-4, where if you interface with it on ChatGPT, you know, you could think
that it was a sort of smart high schooler that, you know, couldn't solve every problem but could,
you know, solve difficult math problems, could translate between different languages, could
come up with interesting new recipes, could assist in, you know, difficult
programming problems, could answer, you know, very technical problems in science. And, you know,
I think looking at this trend, a lot of people wonder, you know, is this sort of the end of the
road or is it going to go further? And just based on the opinions of researchers and major labs,
of CEOs of major labs, of kind of the different underlying trends, I think it really could go
much further. And I think in the next, you know, three or four years, we could see programs that
instead of being something like a smart high schooler could have more like sort of PhD-level
knowledge, understanding, intelligence. And also, these systems won't just be deployed in chatbots,
but will also become, you know, what's called agents. So you'll be able to give them a task,
like help me, you know, create the software program. And rather than just kind of spitting out
the first answer that comes into the program's head, it'll actually be able to kind of think,
work, iterate, just like you would do if you were trying to solve a problem like that, and
maybe in 30 minutes come back to you and give you a really, really good answer and also
kind of operate relatively autonomously while doing it. And I think this could really change a lot
of things about society quite quickly in ways that people and especially people in the political
world aren't fully appreciating right now. Wow, that's fascinating. So, excuse my ignorance,
but with AI, if it's something that is being programmed by humans, is it possible for it to exceed human intelligence, or, since it's being programmed by us, does it eventually have a limit?
So I think this is one of the really kind of interesting questions right now.
The first thing I'd say is that we don't quite program AIs in the traditional sense that we kind of, you know, lay out a set of instructions that the program follows.
Instead, what we do is we kind of, we use these tools that are able to process data and
sort of by processing data are able to kind of learn on their own.
So they learn things that we don't always know they've learned.
And usually this is just kind of like little interesting things, like, you know, GPT-4 was
never taught to play chess.
But when someone tried to play chess against it on ChatGPT, they realized that it was quite a
sophisticated chess player.
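A toy way to see the difference between laying out explicit instructions and letting a system learn from data is the sketch below, using scikit-learn: the rule the model ends up applying is never written down by the programmer, it is inferred from labeled examples. The data and labels here are made up purely for illustration.

```python
# Toy sketch: no explicit rule is programmed; the model infers one from examples.
# Assumes scikit-learn is installed; the data below is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Toy examples: [hours of daylight, temperature in F] -> 1 for "summer", 0 for "winter".
# Note that we never tell the program where the boundary between the two lies.
X = [[9, 30], [10, 35], [8, 28], [15, 80], [14, 75], [16, 85]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)  # the "program" is learned here rather than written by hand

# The model now applies a rule it discovered on its own to inputs it has never seen.
print(model.predict([[13, 70], [9, 33]]))  # expected: [1 0]
```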
And there are already lots of ways that,
you know, even a system like GPT-4 is kind of smarter than any of us.
You know, it speaks every language.
It, you know, knows more than almost any human does.
It's already kind of read every book, so to speak, in ways that, you know,
humans haven't.
But I think that's not all intelligence is, right?
Intelligence is kind of your ability to create new ideas, to, you know, have like novel insights.
And I think how we might think of AI progressing from here now that it's sort of, you know,
kind of processed all the data or just about all the data that we have available is that kind of
step two. So the program AlphaGo, after it had sort of, and I'm speaking loosely here, but after it had
sort of watched all those games of Go, it just started playing against itself. That's the sort of
second step that took it to become a much better player than any human. And the question
that a lot of machine learning researchers are grappling with right now is could you do the same thing,
not just for Go, but sort of for everything.
You know, will AIs be able to sort of argue against each other?
Are they going to be able to, you know, like say one AI works on a math problem,
and another AI checks the math problem?
And you sort of have this recursive process where, by kind of going back and forth,
it becomes smarter than humans across a wide variety of domains.
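To illustrate the "one AI proposes, another checks" loop in the simplest possible terms, here is a toy, fully runnable sketch. The "solver" and "checker" below are trivial stand-ins (guessing and verifying an integer square root); in any real system each would be a call to a separate model, and nothing here reflects how such systems are actually built.

```python
# Toy sketch of a propose-and-verify loop. The solver and checker here are
# trivial stand-ins; in a real system each would be a separate model.
from typing import Optional

def propose_solution(previous_guess: Optional[int]) -> int:
    """Stand-in 'solver': proposes the next candidate answer."""
    return 0 if previous_guess is None else previous_guess + 1

def check_solution(problem: int, guess: int) -> bool:
    """Stand-in 'checker': verifies the candidate independently of how it was produced."""
    return guess * guess == problem

def solve_with_verification(problem: int, max_rounds: int = 1000) -> Optional[int]:
    """Go back and forth between solver and checker until the checker accepts an answer."""
    guess = None
    for _ in range(max_rounds):
        guess = propose_solution(guess)
        if check_solution(problem, guess):
            return guess  # the checker signed off on this answer
    return None  # no accepted answer within the round budget

print(solve_with_verification(144))  # expected: 12
```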
And I think this idea, what people often call superintelligence,
is kind of a very key question for, you know, the future of AI and how we
deal with it. This is Michaela Estruth on Radio Free Hillsdale 101.7 FM. With me today is Nick Whitaker,
a fellow at the Manhattan Institute, where he recently published a report on artificial intelligence,
and he's here discussing it with us. In your article, you talk a lot about the future of AI,
especially in regards to national policy or foreign policy. So what benefits can AI bring to our
national security and what are some precautions that you think we should be implementing?
So I think it's hard to know exactly how, you know, AI will be deployed in a military context.
But I kind of sketch out some ideas that I think are instructive just because I think that
there's enough kind of possible examples, not to say that every single one I lay out will
necessarily come true, but I think a lot of them will be used.
So I think the simple one is that you'll have a system like, you know, ChatGPT, which is,
you know, a program that uses GPT-4, but it'll be much smarter in the future.
So rather than just being able to sort of help a student with their homework, it will be able to help someone develop new weapons.
It'll be able to help someone plan battles.
You could imagine kind of generals consulting AIs like, you know, GPT-6, let's just say.
And that program will help the generals in kind of their command of the military.
But it doesn't really stop there.
It could also be that, you know, these AIs are plugged into weapons systems, you know, like drones, so they can become autonomous and operate on their
own, such that they can kind of navigate, plan, and execute attacks relatively autonomously.
And then I think there's this third category of kind of, you know, research and development.
The nations that have advanced AI will have all sorts of economic benefits, both in terms of
speeding up processes within their economy and also, you know, being able to develop new
weapons systems, even weapons of mass destruction that we've never seen before because it would be
like, you know, having the Manhattan Project equivalent helping you discover the next kind of
decisive weapon. So, you know, I think a lot of these possibilities are worrying. I think that
the U.S. needs to stay on the cutting edge of AI. But I also think we need to understand the capabilities
of state and non-state actors, you know, our kind of geopolitical adversaries and terrorist groups,
in terms of how they might employ AI in their attacks. And I think there's sort of a huge range
of things we need to do there, from ensuring that the sort of physical technologies needed
to create nuclear weapons and biological weapons are defended against terrorist groups.
And also that we do things to hamper the development of AI in places like China, where we don't
want them to have more advanced AIs than we do.
So we can do things like restricting the flow of chips to countries like China that are used to train
AIs.
And we can make sure that our labs at home are secure such that the kind of secret techniques
that are used to create AIs aren't just being continuously leaked to the CCP.
Yeah, actually following up on that, do you see hacking issues increasing with the progression of AI,
or do you think they will decrease as the technology gets harder and harder to crack, basically?
In the near term, I think there's going to be kind of an unprecedented amount of hacking and cybercrime enabled by AI, and I think this is a very worrying thing.
We've already seen this.
There have been reports from both kind of the FBI and NSA that we've seen an uptick of, sort of, not the super scary, you know, shut-down-the-public-infrastructure kind of cyber attacks, but just stuff like using deepfakes to scam people on phone calls, phishing attempts where you get an email from a false source, you know, claiming to need money or something.
AI is already being used for these.
But I think if AI gets better, you could imagine that it's able to, you know, sort of assist in major national cyber warfare efforts such that it could, you know, shut down critical infrastructure and lots of other things like that.
I think later, you know, there might be a chance that AIs, you know, friendly AIs, are able to reinforce our cybersecurity infrastructure
such that we're more secure than ever.
But there's just this question of whether, you know, what's going to come first and how will that change the balance of offense and defense?
And I think a key priority for us is to stay on the cutting edge of AI such that we're able to shore up our defenses before hostile actors are able to make sort of strikes in the cyber realm.
Okay, so kind of drawing an analogy to the recent Microsoft outage, I know it was a technological failure, it wasn't a hack or anything. But do you think it's possible for us to become too reliant on AI, kind of like we've become reliant on technology? And then if it doesn't work, if it
fails us, our systems kind of malfunction?
Yeah, I definitely do.
And I definitely think that in terms of autonomous systems, in particular, there's a very
worrying possibility that there will be unintended consequences of, you know, these sorts of
autonomous systems.
There's sort of different reasons this could happen.
There's one sort of classic kind of sci-fi reason that I actually think is quite a real
possibility where you sort of ask the AI to do one thing and your request is for some reason
kind of misinterpreted, and it does another thing by sort of overgeneralizing your
request. And really, we need sort of new engineering, new machine learning techniques to ensure
that this doesn't happen. Currently, we sort of ensure these things don't happen by grading
the outputs of AI in a process called reinforcement learning from human feedback. So very simply,
you know, you look at the output of what an AI does, you know, does it answer your question
well, you give it a thumbs up if it does, and you give it a thumbs down if it doesn't.
And this is something that labs are doing to make sure that these systems in general give,
you know, helpful and useful advice.
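To make the thumbs-up/thumbs-down idea a bit more concrete, here is a heavily simplified sketch of the reward-modeling step behind reinforcement learning from human feedback: a small classifier is fit to human ratings of model outputs and can then score new outputs. Real systems train a neural reward model on large amounts of preference data and then optimize the language model against it; the texts and ratings below are toy stand-ins.

```python
# Heavily simplified sketch of the reward-modeling step in RLHF.
# A logistic regression over toy text features plays the role of the reward model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "human feedback": model outputs labeled thumbs-up (1) or thumbs-down (0).
outputs = [
    "Here is a clear, step-by-step answer to your question.",
    "Sorry, I can't help, figure it out yourself.",
    "The answer is 42, and here is why that follows from your premises.",
    "asdf asdf asdf",
]
ratings = [1, 0, 1, 0]

# Fit a stand-in "reward model" that predicts how a human would rate an output.
reward_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
reward_model.fit(outputs, ratings)

# Score a new output: higher probability ~ more likely to earn a thumbs-up.
new_output = "Let me walk through the solution carefully."
score = reward_model.predict_proba([new_output])[0, 1]
print(f"estimated thumbs-up probability: {score:.2f}")
```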
But I think the problem is if AIs were coming up with, you know, answers that were so complex
that they were very hard for humans to evaluate, there's this new question of how do you evaluate
those answers.
Do you have another AI do it?
You know, maybe, but that has problems of its own.
So I think this is a very central area of research, you know, figuring out how to control
more and more advanced AIs to ensure that unintended
consequences don't arise from their use.
What do you think about balancing the progress of AI and then also maintaining human
involvement, kind of just, I guess, speaking in general with our economy? Like, will it replace
jobs?
Will it almost, like, displace us?
A lot of people kind of cry out about that in fear.
And I was just curious about your thoughts on that.
Yeah.
I mean, I think this is a really difficult problem.
So with most technology, what happens is that it sort of complements
human labor. It frees up some part of human labor such that humans are able to go on to do other
stuff and often those are sort of more creative, more fulfilling jobs and technology has in general
sort of automated the very rote and not fun parts of life. So just for example, you know,
in the 1970s, I believe, in every state secretarial work was the most popular job. And now it's not.
And that's not to say that there are no more secretaries; there often are. But for many people,
computers have automated a lot of the tasks that were traditionally assigned to secretaries,
like word processing, sending messages, planning travel. So if AI rolls out like that,
and you kind of have AIs that are powerful, but they can't do everything, you know,
there will be lots of new jobs for humans. And a lot of those new jobs could be creative.
You can imagine a new age of, you know, great carpentry and great artwork. But I think there's
also this possibility in kind of the true sort of definitional sense of AGI that it's able to
literally do everything that humans can do. And I do think this is a very worrying possibility
in that it kind of reshapes our relationship with work and kind of raises these very deep
questions of, you know, what should the human experience be and what should human flourishing
mean when we don't sort of need to do work in the same ways that we did before. I think it's
sort of too early to tell which way AI is going to go on these, on these two paths. But I, you know,
in my report I mentioned that I'd love to see kind of yearly reports from the White House tracking
the extent to which AI is displacing jobs, what new jobs are being created from AI,
and really getting a sort of bird's eye view of this kind of economic transition that we're going through.
Also, in your report, you talked a little bit about immigration and talked about how it was actually
crucial in AI development. And I was wondering if you could unpack that.
Sure. So I'm sure we all have, you know, different opinions about kind of the immigration issue in
general. I know I have my opinions on, you know, how the U.S. broadly should think about the
question of immigration, both low-skilled immigration and high-skilled immigration. But I think on this
specific issue, there are sort of kind of unique geopolitical reasons to be in favor of making
sure that the U.S. has the best AI engineers. And I think the case is pretty simple. You know,
during World War II, you know, as kind of the Manhattan Project was forming, a lot of the best
physicists in the world were leaving Germany, often they were Jewish, and leaving the continent
of Europe in particular and sort of all converging upon the United States. And that left the U.S.
in this amazingly powerful position from the standpoint of technological competition in World War II,
where really kind of all the greatest minds of the world in physics, you know, but for very,
very few, were both in the United States and eventually converged on Los Alamos.
And I think there's a very similar situation with AI where a lot of the best AI researchers are American.
I think probably about, you know, for the 1,000 people that are really leading AI, I think about half of them are American.
But, you know, many of the top graduates are, you know, are from places like, you know, India and China and Europe.
And I think we should take advantage of this talent because that not only kind of supports the U.S. advantage in artificial intelligence, but it also weakens the advantage of some of our geopolitical adversaries.
And there's a big problem here, which is that, you know, if these people have families back home, you know, kind of coercive governments can force them to either, you know, spy on U.S. companies or leak material. We've already seen this happen before. There was an engineer named, I believe, Linwei Ding, who was a Chinese Google employee who stole, you know, about 500 kind of key Google AI secrets and sent them back to China. And he was arrested in California a few months ago.
But I think there's like a real opportunity for a compromise here where we're able to ensure that these people are sort of loyal to the United States and aren't sort of, you know, conducting foreign espionage.
And then we actually use them to work with American labs and American efforts to, you know, build AI systems that are able to benefit all of humanity and are able to protect the national security of the United States.
Yeah, I appreciated that analogy back to World War II.
I didn't think about that. That makes a lot of sense.
Well, Mr. Whitaker, those are all the questions that I have for you today.
Is there anything that my questions didn't address that you wish to say?
So I think there's been a debate about AI in D.C.
And the question has been, you know, should we regulate AI or should we not regulate AI?
And I think there are some things, you know, like the use of deepfakes to manipulate elections or the use of deepfakes to spread pornography, that are kind of clear-cut cases
for regulation. But really what I'm interested in, and I think the report is interested in,
is sort of creating early warning mechanisms within government, such that if AIs do continue
to progress rapidly, that people in the United States government and the population at large
are able to understand what's coming and are able to take evasive action and are able to
anticipate it. So I'd love to see things like, you know, AIs being evaluated to see how
powerful they are within the government. And I don't think this requires regulation. And I don't
think this requires, you know, highly sort of partisan measures. But I think it's really important
that more people know what's coming for this technology, which could be a key
geopolitical technology and a key economic technology. I think the only people right now that really
know what's happening are in San Francisco at AI labs. And I'd love to live in a world where we sort of
all had a broader understanding so that we could kind of choose, as a people, how we wanted to deal with
it and how we wanted to use it. So I think we need to sort of cut through this idea that the only
question is whether to regulate or not regulate, but really think about this as a matter of preparedness
such that if things, you know, do change rapidly, we're in a position and we're on the ball
to deal with it. Yeah, that makes a lot of sense. Well, Mr. Whitaker, thank you so much for
sharing your expertise and coming on the air. Thank you for having me. That was Nick Whitaker,
a fellow at the Manhattan Institute, where he recently published a report on artificial
intelligence. And I'm Michaela Estruth. You're listening to Radio Free Hillsdale 101.7 FM.
