Angry Planet - The US Government’s AI Grand Bargain
Episode Date: December 6, 2025

The White House is portraying the race to adopt AI as an existential crisis. It's the next Manhattan Project, they say, a technology so important it will require an unprecedented build-out of energy infrastructure and massive data centers. But the Manhattan Project was a government-led technological drive, whereas AI is led by salesmen and corporations. What could possibly go wrong?

On this episode of Angry Planet, Ben Buchanan is here to tell us about the government's role in fostering AI. Buchanan was an AI advisor during the Biden administration, where he helped write the policy that paved the way for public-private partnerships between DC and AI companies. Now he's a professor at Johns Hopkins and, though he's still an AI advocate, he's got concerns: slop, public land use, and autonomous weapons. We get into it all on this episode of Angry Planet.

Topics:
AI as an arms race
Nukes are cheaper than AI
Government's role in the construction of AI infrastructure
What are the stakes of the AI competition between the United States and China?
"More powerful AI systems will enable more powerful cyber operations."
"It's the hardest thing we do as a species."
Turning over federal lands to data centers
How Trump is shooting himself in the foot regarding AI
"We're just chasing power all across the country."
"We're going to be building data centers for a very long time."
How the AI expert uses AI
"There's a long list of concerns."
Accident reports and autonomous weapons

Links:
The AI Grand Bargain
Ben Buchanan
DOE on federal lands for data centers
Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?
DoD Directive 3000.09, Autonomy in Weapons Systems

Support this show: http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.
Transcript
Love this podcast.
Support this show through the ACAST supporter feature.
It's up to you how much you give, and there's no regular commitment.
Just click the link in the show description to support now.
Hello and welcome to another conversation about conflict on an angry planet.
I am Matthew Gault.
And today, we're going to talk about AI and the grand bargain that I would say all of us in the country are being asked to make about AI.
And to do that, we have with us Ben Buchanan.
Sir, can you introduce yourself?
Thanks so much for having me.
I am Ben Buchanan.
I'm currently the Dmitri Alperovitch assistant professor at Johns Hopkins University
School of Advanced International Studies.
And prior to that, I was in the Biden White House for all four years,
including as President Biden's special advisor for AI.
How does one become an AI expert at that level?
Well, the field is so new, at least in its national security implications, that there's no set path.
My path was through the world of cyber operations.
My Ph.D. was on the way in which nations hack one another and the national security and geopolitical implications of that.
I was a professor at Georgetown University, and it was during my postdoc and PhD that I started also getting into AI.
I wrote a book on AI and national security, and then was asked to join the Biden administration in 2021 and served in a number of roles at the White House.
All right. So you've just co-written this piece in Foreign Affairs, which is what we're here to talk about: "The AI Grand Bargain: What America Needs to Win the Innovation Race," which I read.
You know, I cover AI in my day job, kind of aside from this; I've got like two big pieces that have just come out about it.
And I'm pretty interested and invested in this idea that AI is being characterized as an arms race. The Department of Energy has called it the next Manhattan Project, and that metaphor has kind of stuck.
Do you see AI as an arms race between us and China, who I think are the main players in that metaphor? And is that a useful metaphor?
I don't see it as an arms race.
I think I see it as a competition.
We could talk about different components of the competition,
but I think the notion of a race, especially to a particular point,
is not particularly productive, and especially an arms race has escalatory overtones
that I don't think are appropriate or required in this context.
Yeah, I'm glad you say that because I find the Manhattan Project metaphor
not a good one, because at the end of that road, you get devastating weapons that change
the way the world works. And it seems to me that AI, the kind of AI we're talking about
is very different than that, right?
Yeah, I do not think I've ever used the Manhattan Project metaphor. I think probably the only place where the metaphor feels apt is the scale. Even if you adjust for inflation, the amount of money being spent on AI is more than we spent on the Manhattan Project, which was actually surprisingly cheap relative to some investments today.
But no, for the reasons you mentioned, I don't really use the Manhattan Project metaphor. And I think, in fact, the disanalogy is important here, too, which is, as we argue in the piece, AI is the first revolutionary technology in probably 100 or so years that doesn't come from the U.S. government. The Manhattan Project is, of course, the revolutionary technology that most canonically comes from the U.S. government. So I think in many respects, it is the disanalogy with the Manhattan Project that makes this such an interesting and important topic.
All right, let me step back and ask something that I ask everybody who comes on to talk about
AI. Define it for me. What do you mean by artificial intelligence?
Artificial intelligence, particularly in its current instantiation of machine learning, is
using machines to learn patterns from data with a learning algorithm and a lot of computing
power, and then using what those machines have learned, those patterns, to make predictions
or give classifications about new data or new problems to which we don't have the right
answer, and then to make progress on those new problems.
So we have a two-step process here of training neural networks to make discoveries about the world,
or to embed in them discoveries about the world,
and then using those trained AI systems
to go out and do something useful for us.
Big pattern recognition machines.
Yeah, essentially.
I think, you know, I don't discount the possibility of discovery,
and I think I reject the notion that these are just parrots.
I think there's more going on than that.
But I think in the machine learning moment,
which is the AI paradigm that's dominating right now,
the way they work is they are meant to find the patterns or the structure in data
and then enable us to hopefully do something useful with what they found.
All right.
So big picture, AI is kind of at this moment.
The AI industry in America is at this moment where it's facing a set of unique challenges
that your piece argues it would behoove the U.S. government to help them with.
Is that accurate?
Yes.
I think the grand bargain in the piece is that there are challenges the industry faces that the U.S. government can uniquely help with,
and there are challenges the U.S. government faces that industry can uniquely help with,
and it is in the U.S. national interest for both sides to help the other.
What are the challenges that AI is facing?
The AI industry.
The AI industry, yes.
I think one of the big ones is power and permitting, and it is a priority for me that if we're going to develop AI,
we should develop it here in the United States.
I think it would be a strategic mistake to cede that development infrastructure overseas.
And AI systems require a lot of power, and they require permits and the like.
And I think the process for building that power and for doing that permitting can be slow and cumbersome.
And this is the case where the U.S. government can help.
Crucially, and I know my view on this is extremely strong, I do not think it should be done on the backs of ratepayers.
So I think the companies need to be bearing the bill for this, unambiguously.
But I do think this is a case in which the companies, no matter what checks they cut,
there is a mechanism for power and permitting that requires pieces of a system outside the companies to work,
and the U.S. government can help there.
You're talking about, when we're talking about power infrastructure, do you think nuclear is the way to go?
I'm worried about how long it will take to build nuclear power.
So I think probably we have to have not quite an all-of-the-above approach,
but I think nuclear is a piece of it.
Small modular reactors and the like, I think, are probably on a five-to-seven-year timeline
at the most ambitious and optimistic.
In the near-term, solar and batteries have huge potential, which is great.
So I think it is not going to be quite an all-of-the-above strategy for power,
but it probably needs to be something close to that.
I think nuclear has a lot to offer.
I'm worried about the timelines for nuclear; even the most ambitious small modular reactors are something like five to seven years out.
I think it's great we're doing that.
I think Amazon has made big investments there.
I think Microsoft has as well.
I think that's terrific, but we're going to need stuff before then.
Solar and batteries are important.
I'm also bullish on things like advanced geothermal and the like.
In an ideal world, what I would love to see is that the companies, some of which are willing to pay more for clean power, can use their desire to build power to catalyze, you know, be the early adopters for technologies like advanced geothermal, where the economics aren't great right now but might get a lot better.
How fast, though? So many things are happening right now, and it does feel like it's changing every week.
There's an overwhelming economic drive to get these data centers built and to power them.
But like you said, with nuclear, to build a new nuclear reactor, a traditional one, not even talking about these small modular ones, you're looking at like 10 years.
And they're doing things to try to speed that up.
We'll see how well that works out.
But like what happens if in five years the bottom of this drops out?
And there's not as much of a demand for this technology as we thought there was going to be.
Or do you see that as a possibility at all?
I think it's, of course, a possibility.
I don't think America will be made weaker by building lots of power.
And I think it's a kind of robustly good thing to do for a lot of reasons.
So I can't really imagine us sitting here in a few years and saying, well, we regret building out lots of power.
If you look at things like electric vehicles and heat pumps and advanced manufacturing in the United States,
there's a lot of reasons to think the power demand is going to spike really, really high over the next few years.
Again, I think electricity prices are already spiking.
So I feel actually more confident that building out the power is a robustly good thing for America to do. If you want to make a case that we're building data centers too aggressively, producing too many chips or whatnot, there's a little more nuance there. But on the
power side, I don't have a ton of concern. I think sometimes about the creation of the Tennessee
Valley Authority in the 1930s, which is a big Roosevelt project. And Roosevelt took an
unbelievable amount of flack for doing that. And it turns out it actually was absolutely the
right decision. People said, what are we going to do with all this electricity? Nobody needs all this electricity. And a huge portion of the American war effort for World War II came out of the Tennessee Valley Authority. My understanding is that's why Oak Ridge is in Tennessee: the nuclear program was based there because it required a lot of power, as did bomber manufacturing and the like. So I think history generally shows that nations don't regret building out electrical infrastructure.
Or massive infrastructure projects in general, right? They tend to be job creators, they tend to show people that the government can do something for once.
You know, it takes a while to come to fruition, but often on the other end of it, it ends up
looking like a good idea.
Yeah, I mean, there's the canonical, like, bridge to nowhere exceptions, where it's like,
we're just wasting money here.
But in general, I am in favor of infrastructure build out in the United States.
And the advantage here is that the companies are footing the bill.
It's not the government footing the bill for a lot of this stuff.
So I think there's ways in which the government could streamline the build-out, could make it easier and so forth, which I think there are good arguments for.
But I think a huge advantage here is it will not be the government footing the bill, unlike the Tennessee Valley Authority, unlike some historical examples.
So kind of outline for me what you see is the stakes here.
Well, I believe AI systems are going to continue to get very good. I think they're very good today. They're going to only get better. And I think
the applications of AI to national security are fairly straightforward. There's an economic
dimension of this, too, that's important in the United States. I will set that aside for one
minute, because this is an angry planet. We're talking about conflict, but we can come back to
the economic side if you want. But on the national security side, we can think about things
like the application of AI to cyber attacks, to cyber operations. And I think to me it feels pretty
straightforward that more powerful AI systems will enable more powerful cyber operations.
And insofar as cyber operations are one of the, you know, core muscle movements of
intelligence collection and national security, which they are, then that's a pretty clear
line that I think is, you know, very easy to draw. You could draw in other areas as well,
AI's applications for material science, AI's applications for intelligence analysis,
AI's applications for weapon design and so forth.
So I think there's a lot of them,
but cyber operations is probably a good place to start.
And why is China the person to be worried about
or the country to be worried about?
How far ahead in the AI game is America versus China?
China is a nation that we need to take seriously as a competitor
for a bunch of non-AI reasons,
the size of their economy and military, their ambitions to be a revisionist power and so forth. So I think there's non-AI
reasons to take China seriously. Once you're taking China seriously, then you're saying, okay,
well, how could AI change the dynamic here? And China is the only nation that could come
close to the United States in AI. They have a tremendous talent base. I think their talent is
extraordinary. They have tremendous energy infrastructure. Their energy infrastructure really
far surpasses ours.
Their capital markets are not as good, but they have a government that's able to put, you know, $200 billion into things like chip manufacturing and the like.
But where they really struggle is the chips.
And even with all the other investment, they really struggle to make the chips.
And that has meant that the United States has a significant lead over China, especially for systems that are trained on each country's stack.
So there really are not competitive Chinese systems trained on Chinese chips.
Even the best Chinese systems, like DeepSeek, so far have been trained on smuggled American chips.
So that's, I think, a really significant advantage for us.
What do you think of the state of their software solutions to hardware problems?
I am not sure there are enduring software solutions to hardware problems.
I think they're really good at finding algorithmic efficiencies, but so are American companies.
I don't know that they're better.
DeepSeek's better at marketing their algorithmic efficiencies, maybe, than American companies are, but I don't know that DeepSeek is better at finding those efficiencies than American companies are.
And I think at the end of the day, it is a core fact of AI
that computing power is the fundamental, indispensable pillar on which everything else rests.
Do we have any idea, like, what's the stopgap for them?
I mean, obviously, it's like, it takes a while to make a fab to make these silicon chips.
There's very few places that are doing it.
One of them is in Taiwan, which, you know, makes that obviously a point of geopolitical concern.
What is stopping China from building its own fabs using its overwhelming, like, top-down approach and just, you know, churning out GPUs at an unprecedented scale?
It's the hardest thing we do as a species.
And China in 2014 decided to do exactly what you just said. China said, we're importing north of 80% of our chips, and this was in 2014, long before any of us were doing AI.
This is a real problem for our country; it's a real weakness.
And China began a $200 billion investment program, public and private money, to do exactly what you said: build out a chipmaking industry, build out fabs, and the like.
And they largely failed.
And the Trump administration, in its first term, made things harder by cutting China off from advanced chipmaking equipment.
In the Biden administration, we went substantially further, cutting China off from chips and also from another generation of chipmaking equipment.
But the reality today is, despite all the Chinese investment for more than a decade at this point, Chinese chip-making is terrible.
And you don't need to take my word for it.
You can take the Trump administration's word for it.
Secretary Lutnick testified before Congress earlier this year that in 2025, China will make something like 200,000 chips.
The democratic supply chain is probably making around 10 million chips.
Our chips are better than the Chinese chips, substantially better.
The Chinese chips are like 40% as good as an Nvidia chip.
So if you sort of sum up the raw computational power being produced by each nation, or each democratic and autocratic supply chain, it's probably something like 100 to 1 in favor of the democracies; depending on how you count, maybe 50 to 1, 100 to 1, in that range.
So it really is just a huge computational advantage,
despite hundreds of billions of dollars in Chinese effort,
a huge computational advantage for the United States and our allies and partners.
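The 50-to-1 to 100-to-1 range quoted above is back-of-envelope arithmetic, and it's easy to check. A minimal sketch, using only the rough figures from the conversation (the chip counts and the 40% per-chip quality figure are the speaker's approximations, not hard data):

```python
# Back-of-envelope check of the compute ratio quoted above.
# All figures are rough numbers from the conversation, not precise data.

us_allied_chips = 10_000_000   # approximate 2025 output, democratic supply chain
chinese_chips = 200_000        # approximate 2025 output, per Lutnick's testimony
chinese_quality = 0.4          # assumed per-chip capability vs. an Nvidia chip

# Weight each side's chip count by per-chip capability, then take the ratio.
ratio = us_allied_chips / (chinese_chips * chinese_quality)
print(f"Raw compute advantage: roughly {ratio:.0f} to 1")  # prints "roughly 125 to 1"
```

Plugging in the quoted numbers gives roughly 125 to 1; counting capacity more conservatively, or crediting China with smuggled American chips, pulls the figure down toward the 50-to-1 to 100-to-1 range Buchanan cites.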
What is it about it that makes it the hardest thing we do as a species?
It is an incredibly precise technical process that unfolds at the nanometer scale. If you look inside an extreme ultraviolet lithography machine, the chipmaking machine, it's something like 10,000 times as pure a vacuum as space. You're talking about a laser that is so accurate that it would be akin to hitting a golf ball on the moon from Earth.
That laser fires, just like a one-two punch, 50,000 times per second, hitting molten tin moving through the air at precisely the right angle, twice. And that process produces radiation at just the right nanometer-scale wavelength, which creates the right pattern on a silicon wafer that's moving at high speed through the machine. And that's the simplest possible explanation, what I just gave.
And this is just an incredibly complicated process.
To be provocative, it's worth noting that basically America can't do it, despite our own significant investments in this.
And Intel, you know, the stalwart American chipmaker, has failed to do this.
So we are very lucky that it is a democratic supply chain, but that is because of TSMC and Taiwan, which you mentioned, which has worked this out.
But this is, I think, definitely the hardest thing we do as a species.
If only Texas Instruments had promoted that guy.
Morris Chang, the founder of TSMC, as you rightly note, was a rising star at Texas Instruments and failed in his effort to become CEO, went to New York, failed at a company in New York, and then at age 56 moved to Taiwan and started TSMC, which is one of the most impressive companies in the history of humanity.
All right.
So what are America's efforts to build its domestic fabrication?
How's that going?
In the Biden administration, we passed something called the CHIPS and Science Act, which invested $52 billion in chipmaking here.
I take no personal credit for it.
Other folks worked on it.
I think it has had some success.
I think it's primarily had some success working with TSMC in Arizona,
some with Samsung as well.
But this is a work in progress.
And as a percentage of overall chip making in the world, the amount produced in the United States is not terribly high.
It's growing, which is good.
I think it was a really smart bipartisan law, fully supportive of it.
But we've got a long way to go.
If you had your druthers, if you could create the kind of public-private partnership that you think we need, what does that look like?
It's a great question, and I don't know that I have a single answer.
I think we're going to evolve towards something, but where I would begin is probably a place
where we began with President Biden's AI Infrastructure Executive Order, which is making
federal lands available for advanced AI training and building power for advanced AI training
with the companies bearing all of the cost for that buildout, but with us using
federal lands as a mechanism to provide the land and build it quickly. They can compensate the
government for the land and to also build the power to enable the data centers. And then on the
back end of that, setting security standards for those data centers and ensuring U.S.
government access to the systems that are trained there. That would probably be the starting
point of a deal.
What do you make of the Trump administration's efforts so far? There's like three executive orders from May 23rd. There have been other executive orders
that are not only paving the way for doing some of what you're talking about,
but also kind of reforming the Nuclear Regulatory Commission
to make it easier to build nuclear power plants as quickly as possible.
It seems like there is a push in this direction.
I think there's a push in this direction in some areas.
The nuclear stuff, I think, is a good example of that.
I think in other cases, the AI rhetoric has been good,
but has been undercut by the actual actions.
So take energy, for example.
President Trump talked during the campaign about the importance of energy.
I think he even talked about energy for AI.
And this was an area where I had a lot of hope with them coming in that they could pick
this up and on a bipartisan basis really expand the amount of energy production here in the United States,
which is good for a lot of reasons, including AI.
They have not done that.
They have, in fact, been very aggressive at canceling energy projects,
taking us in the wrong direction: canceling a five-gigawatt solar farm in Nevada, canceling 700 megawatts of offshore wind in Rhode Island, canceling five-gigawatt transmission lines in the Midwest. And I am worried that that is going to undermine our
competitiveness in AI. The Trump administration's science director testified that we should run data centers on coal-fired power and the like. I just don't think that's realistic. So I mean that
less as a political statement and more as a math statement. That is not going to provide enough
power for us to do this seriously. And I would love it if the Trump administration lived up to
what it said it was going to do on energy and the AI action plan, for example. It has not yet
done that.
Maybe this is in the weeds, but I am curious. Can you walk me through the math of why coal is bad?
I think my understanding is that the amount of coal you would need to scale to the gigawatts and gigawatts that we're going to need for AI is just not tenable. America's not producing that. And, you know, for climate reasons, I think it's a good thing
we're not doing it on coal. But if we're talking about a really aggressive buildout in the next few
years, you're going to be talking about things like solar, geothermal, certainly some natural gas,
and then eventually transitioning to nuclear.
How much public land do you think we're talking about?
Essentially a rounding error relative to the amount of land that the Department of Defense and the Department of Energy and the Department of the Interior
have. So, you know, I think you're talking square miles of land, but not thousands of square
miles of land. And, you know, we have huge military installations that are used for all sorts
of activities that are much, much bigger than anything we're talking about here.
So something that I've been kind of interested in, that I think is going to be a budding problem for this project over the next five years, is local resistance to the build-out of data centers. People generally, I think, are happy to have
more power come in. But it's the data center specifically that local communities are not
happy about. They're noisy. They suck up all the water in the area, and people's energy
bills are going up. And you can build the data centers faster than you can build the power
infrastructure, right? So the data centers are getting built.
Yeah, I actually have a lot of empathy for the communities on this.
I think the water thing is less of an issue now with closed-loop systems, but the energy thing is real.
The noise thing is real.
Part of the argument for doing it on federal lands, especially likely out in the West, is that you are not doing it in communities.
I live in Northern Virginia.
It's a huge local political issue here, data centers and electricity prices and the like.
So my objective...
One of your state reps basically just won their seat pretty much only on this issue, right?
Yeah. And again, I think part of the reason for doing it on federal lands, kind of out in the desert, is to not place them in communities. I do think, as I said before, a really important principle here is that no matter where it's done, the companies need to bear the costs of doing it, and they need to bear the cost of the electricity increases and the like. So one of the things that we began to work on, though it was still not as big of an issue when I was in office, is what's called special tariff rates:
ways in which companies can say we are going to pay more per unit of electricity such that ratepayers' rates don't rise.
And I think it's a very important principle.
And we say explicitly in the piece that should be a fundamental part of any kind of bargain.
But yeah, I have a lot of empathy for local communities on this.
And I think we should design a solution that doesn't needlessly impinge on those communities and certainly doesn't put costs on them.
Explain the closed-loop water system.
There's an older design of data centers that uses water and then sort of dumps the water and gets new water. Folks are now moving towards data centers that have a closed loop, so it uses the water multiple times; there's a fixed amount of water in the facility that is cycled again and again for cooling. And that is where data center designs, at least as I understand it, already are and are continuing to head.
Do we have any idea which one's cheaper to build?
I would not claim expertise on that.
My understanding is that the net costs of closed loop are better; on the actual construction costs, I'm not sure.
All right, fair enough.
What can be done about, I mean, other than building it out on federal lands, is there anything that can be done about the noise complaints?
I do not claim expertise on data center construction.
I think our view was we should not put them in communities that don't want them, kind of full stop.
And the nice thing about the huge tracts of Department of the Interior land in Nevada and Idaho and places like that is there are not built communities there.
We're using a lot of that land already, or we've earmarked a lot of that land already for military and Department of Energy activities.
Historically, it was used in the Cold War for things like that.
Some of it actually has power infrastructure already built there.
We need to revitalize it, but there's the transmission lines and the like already there
from historical application.
So we felt like there was a real opportunity there.
And to their credit, this is a place where the Trump administration has continued our work.
And they've identified, I think, 10 Department of Energy sites and the like.
So while I have concerns about their other energy policies, they do seem to have bought
our vision for how to do this on that piece of it.
Why do you think so many of these data centers are being built out near communities?
Is it because it's by private corporations that don't have access to the public land?
They're chasing the power?
Yeah, because that's where the power infrastructure is.
Exactly.
That's my understanding.
Again, I've never worked on the data center construction side.
But my understanding, from when we were talking to them, is a very memorable phrase I heard in '23 or '24. I asked, how do you pick your sites?
And they said, we are just chasing power all across the country, because power is such an issue for the United States.
And that's why I get really worried when I hear about us canceling power transmission lines and power generation projects because we're going to need every single electron.
Well, you know, solar energy is woke and we simply can't have that.
Yeah, the sun, I think, predates all of us.
And it's a great source of energy.
And, you know, solar is not the only answer.
There's a lot of other things we could do that would supplement it.
but it's hard for me to imagine us addressing our energy needs on any front, including AI,
without a lot of solar.
And this is the case where China is just really showing the world how it's done.
The stats are just incredible of China adding, you know, in six months more solar power to its grid
than America's ever added in its history.
And China putting on something like 90 gigawatts of solar in a single month, you know.
So it's just like one step after the next for Chinese solar.
I think something like 80% of Chinese solar panels are built in China.
So just remarkable accomplishment from the Chinese side on the energy buildout.
And that's one of the biggest stresses I have.
Is that they're better at the big infrastructure projects, specifically energy, right?
Exactly. Exactly.
And if they ever were to get the chips, they would have the power to run them.
We have the chips, which is great.
It's the most important thing, but we need to build out the power to make sure we can use them here at home.
Can we talk about nuclear stuff for a little bit?
Sure, though I don't claim a ton of nuclear energy expertise, but yeah.
What do you know about the small modular reactors?
People in the White House who knew a thousand times more than I did about nuclear were optimistic on them.
But they did say you're looking at, you know, five to seven years kind of minimum on that.
If other folks who know more about nuclear than I do have more aggressive timelines, I would love to hear them, but I'm not in a position to adjudicate myself.
But it was, I think, pretty clear to us that if we're looking at planning for data
center buildouts in 27, 28, that kind of stuff, nuclear is probably not going to be a big
piece of the puzzle on that kind of two or three-year timeline.
Yeah, but, all right, let me push back on that, because, well, I agree with you that that's true, but knowing what I know about nuclear power, the tech companies that are building the AI are certainly betting that nuclear is going to be a huge part of the equation.
I think that's right, and I think it will be, just not on a two- or three-year timeline. And we're not going to stop building data centers in '28. We've got to build data centers. I think we're going to be building data centers for a very long time. So I am very glad folks are making those kinds of investments. I'm very glad we have companies pursuing fusion. I don't claim enough to adjudicate which companies are doing the right nuclear bets, but when a company came to us and said, oh, we're going to do more on small modular reactors, my reaction was, that's a good thing. It can't be the only thing, but it's a really good thing, and I'm glad we're allocating capital toward that.
I think Microsoft is the most interesting because it's putting money on all the different new bets.
The smartest thing I think they're doing is reopening older reactors and, like, refurbishing them. Like going into Three Mile Island and turning that tower on again. You don't have to wait 10 years to see the returns on that. And they are also investing in small modular, of course, going out and probably building them in the Midwest. And they're working on getting new reactors up, just, like, full-scale reactors.
I think that is right.
I do remember the day when I had to walk into the West Wing
and explain that Microsoft was restarting Three Mile Island
and hopefully this was not a metaphor for America or humanity's pursuit of AI.
But yes, you clearly know more than I do about this,
but I do think everything you said lines up with my experience.
All right, can we turn into the,
can we get to the AI skepticism portion of this?
Sure.
Do you use it?
Yeah.
What do you use it for?
Um, a lot of times I use it to reason through something. My day job is as a professor, so sometimes I will say, I'm thinking about explaining this idea this way to my class today. Can you think of a better way? What do you think of this understanding of it? I've never really been one to be like, here's the prompt, write me the thing, copy-paste.
sometimes I'll say, here's what I've written, what do you think, how could this be misinterpreted, how is this not clear, that kind of stuff.
And then I am the proud parent of a three-month-old, and my wife and I remark all the time that we don't know how people parented before Claude, because we ask Claude a ton of questions about what's normal and what's not normal for, you know, infant development.
So Claude, that's the one that's your go-to.
Yes, but I'm not endorsing the product.
No, no, I'm just, I'm curious.
Yeah, yeah, yeah.
That's the one that you've found the most useful, Claude, of the different LLMs that you've played with.
Yeah, but I don't know that I've done like an extensive survey.
And I also, I don't represent them to anyone, but I have done some consulting work for Anthropic in the past, which I guess I should disclose now that I've talked about their product.
But that is not why I mentioned it.
Fair.
But I mean, they would probably just like, that would bring you in contact with it and you would end up using it because you've done some consulting work with them.
so it just makes sense that you would continue to use it.
I used Claude before I did the consulting work, but yes.
Okay.
How has the error rate been? Hallucinations? Like, what is your experience with, like, double-checking it and that kind of thing?
If I'm doing it for anything useful, like anything, you know, at all mission critical, I will double check it.
I think where I find it most useful are things where you could kind of immediately tell if it's right or wrong. Based on your questions so far, clearly, I think you know the P does not equal NP problem of computer science.
There are some kinds of problems where developing the answer is quite difficult, but when you look at a potential answer, it's very easy to tell. If I asked you, what is the only country in the world that has all of the vowels except for Y in its one-word name, you'd probably struggle for a while to come up with it. If I said to you, does Mozambique have all of the vowels except for Y in a one-word name, you could instantly look at it and be like, yes, it does. So I think there's a whole
set of challenges and questions where we ask an AI something and we can kind of tell pretty
quickly, is it right or wrong? So if I give it an analogy I'm going to use in class and it says,
I think you should change it in this way. Using my own judgment, I can determine pretty
quickly, do I like that response or not? If I give it some writing I've written and say,
you know, which sentences are unclear in this writing? How can I make this punchier and more direct?
I can get its edits pretty quickly and then adjudicate them myself pretty quickly.
So I think it is fair to say I would be wary of it in many contexts because of a
hallucination risk or I would want to build a structure around it.
But I don't know that it diminishes its utility in many other contexts where it's pretty easy
to check, does it work or not?
And by the way, the one that a lot of companies have made a great deal of money on, because it fits this property very well, is coding.
Because after code is generated, it's pretty easy to tell, well, does it pass all the tests and does it work, or does it not? So that's, I think, a canonical case.
It's probably very profitable for the companies.
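The property being described, answers that are hard to generate but instant to verify, can be made concrete with the guest's own vowel puzzle. A minimal sketch, where the candidate country list is purely illustrative:

```python
# The "easy to verify, hard to generate" property from the conversation:
# checking one proposed answer is a one-liner, while finding the answer
# means searching through every candidate.

def has_all_vowels_except_y(name: str) -> bool:
    """Check whether every vowel a/e/i/o/u appears in the name."""
    return set("aeiou") <= set(name.lower())

# Verifying a proposed answer is instant:
print(has_all_vowels_except_y("Mozambique"))  # True

# Generating the answer requires scanning candidates, the slow part:
candidates = ["Chad", "Peru", "Mozambique", "Norway"]
answers = [c for c in candidates if has_all_vowels_except_y(c)]
print(answers)  # ['Mozambique']
```

The same asymmetry is why coding fits so well: generated code can be run against a test suite, which plays the role of the instant check.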
Do you have concerns about AI in the way that we're using AI right now?
Yes.
In any capacity, like, what are they?
I mean, there's a long list of concerns. I think I see significant risks, technical and societal, from our use of AI. I worry a great deal about those. Offloading thinking to the machines, especially for people who are, you know, younger and still learning. I worry we may lose the skill of writing. I think good writing is a key part of good thinking in many respects. It is not an accident that I make my students do an in-person midterm exam, because my argument to them is, if you're going to be an AI national security policymaker, you're probably not going to have Claude in the Situation Room.
There are times that you just have to sit down and be able to do it yourself with the knowledge in your head.
No one's letting Claude in the SCIF, hopefully.
I never used Claude in the SCIF, yeah.
So I think, we can talk about national security adoption, if you'd like.
I think there are real risks here, and then I think there are also risks of AI slop and time-wasting algorithms and societal effects from that. This is, like, far from my expertise as a national security policymaker, but as a human, I have concerns about it.
Just because you open the door, let's put a pin in the human concerns.
I am very interested in that.
Let's talk about national security adoption.
You seem to know the most about, like, Claude and Anthropic, or, you know, you have some relationship there.
Anthropic's pretty interesting to me because they have a relationship with the NNSA. They did this thing where they made a classifier that runs on top of Claude to make sure that people aren't using it to ask the wrong kinds of nuclear questions. And the way they did this, in my understanding, is that they reached out to the NNSA, and they went into an AWS top-secret cloud server, uploaded versions of Claude into it, and let the NNSA, like, red-team it.
Do you, like, how, like, how would you rate that as an effective means of, like, AI adoption in a national security context?
Is that enough to, like, keep things separate?
So let me be very clear here.
I am not speaking for Anthropic.
I know this project very well because I was on the government side of this, but I never have worked on this for Anthropic and I'm not speaking for Anthropic.
Like, this specific NNSA project, you were on the government side of it?
I was, I led AI policy for the White House, so.
Yeah, yeah, no, I'm just.
Ultimately, you know, I didn't do the work. NNSA did the work, but I was well aware of this project.
Fascinating, because I'm excited to hear more about this then.
Well, I will obviously respect classification.
Whatever you can tell me.
I think this project was born of a set of related efforts to say there are areas where the government has particularly deep expertise.
Nuclear, obviously, the number one, but also things like biology and the like, cyber to some degree.
We would like to lend that expertise to the companies to help them red-team their systems and evaluate their systems. And the NNSA effort is one of them.
The creation of the AI Safety Institute is another, and it was meant to create totally voluntary, non-regulatory arrangements between companies and government to say, let's together explore, from each of our own vantage points, what this technology might bring to these areas of capability, but also of some concern. I actually don't think any of that is a national security adoption story. That is a domestic AI development, do-it-safely kind of story. One that I'm proud of, and one that I commend Anthropic for being a part of. And OpenAI had a relationship, maybe still does, with the AI Safety Institute, as does Anthropic. So it's not just Anthropic. But I don't think it's a national security adoption story. The national security adoption story is closer to, well, you know, what are the ways in which this technology could be useful to the Department of Defense and the intelligence community?
And I think people immediately there go to some kind of Skynet thing. And I'm here to tell you, like, it's less that, and it's more that DoD logistics is incredibly old and boring and, you know, relies on spreadsheets and, like, unbelievably old technology.
It's feeding accident reports into a machine and looking for patterns to make sure that you order a wing part before the wing falls off.
Well done on your homework, because helicopter maintenance is one of the things that DoD has worked on for AI for a long time. So it's a lot of kind of unsexy stuff, but, you know,
there's the old Napoleon adage and army marches on its stomach. That is ultimately the kind of
stuff that helps the nation win wars. So it's really important to do. And that's a huge portion
of what the Department of Defense does. So that's the kind of stuff where I think there's a lot
of things that we can use AI for. We have to use it well. We have to have guardrails.
There's a ton of complexity there. But when I speak of national security adoption, I'm thinking more of that than I am of better red-team evaluation through NNSA or the AI Safety Institute.
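The accident-report idea is essentially pattern mining over maintenance text. Here is a toy sketch of what "look for patterns so you order the wing part before the wing falls off" could mean; the report text and part names are entirely made up for illustration, and a real DoD pipeline would be far more involved:

```python
# Toy illustration of mining maintenance reports for recurring parts.
# All report text and part names below are invented for the example.
from collections import Counter

reports = [
    "hydraulic leak near wing actuator during preflight",
    "wing actuator seal degraded, replaced on schedule",
    "rotor vibration above tolerance, gearbox inspected",
    "wing actuator failure, aircraft grounded",
]

watch_parts = ["wing actuator", "gearbox", "rotor"]

# Count how many reports mention each watched part.
counts = Counter()
for report in reports:
    for part in watch_parts:
        if part in report.lower():
            counts[part] += 1

# Parts mentioned most often become candidates for proactive reordering.
for part, n in counts.most_common():
    print(f"{part}: {n} reports")
```

Even this crude frequency count surfaces the "order the wing part first" signal; modern versions would use language models over free-text reports rather than exact string matching.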
I mean, I think red team evaluation with the NNSA is pretty fascinating.
I understand that there's a distinction there.
Yeah, it's definitely fascinating, and I commend NNSA for doing that.
Okay, but it's not just, like, I do agree that, like, I think the overwhelming use case for the DoD and other government agencies is going to be boring, like, spreadsheet stuff that we don't talk about. But that is not the only thing that's happening, right? There are, like, Anduril, Palantir. There is, I can't remember the name of it, but there is a company that has an AI model that's running on one of those little mini tanks, and it can move autonomously on the battlefield, and has shown it off. These things are going into weapons, right? That's, like, that's happening. It's not like it is not going to happen. It is happening. Where for you is the line? And how do we make sure, as much as I hate, like, bringing Skynet into these conversations, because it's a very tired metaphor and cliche, but, like, how do we make sure that doesn't happen?
We had three sets of guardrails, some of which predated me and some of which I helped create. The first is a DoD document, which has been around for a while. I'll call it 3000.09, like the most Pentagon thing ever, like, name the thing with a number.
levels of human judgment were for different kinds of weapon systems and the like. And, you know,
very, very little is even close to fully autonomous. And that is a document that, at least as I
understand the Pentagon's procedures, which I don't claim expertise over, it governs directly
the kind of weapons that are built.
Then there was an additional document that I worked on called the National Security Memorandum,
which was direction from the president to the intelligence community and the Department of Defense.
And that included a framework for AI governance, which laid out a set of principles and directions that circumscribed those agencies' use of AI.
My recollection is that on lethal autonomous weapons, it deferred to 3000.09.
So it didn't have more guidance on that, but it related to things like, how can you use AI in intelligence analysis, or how do you use AI for all the boring things, and what should the guardrails be there, and so forth. And then the last document is something called the political declaration on the use of autonomy in military systems. And this was saying to the world, look, here's how America thinks about its principles on this. This is essentially a higher-level version of the first two documents we talked about, but it is trying to build an international consensus on what right looks like for the use of AI in military systems. And we got about 56 nations to sign up to that during our four years in office.
So my views, I think, are probably closest to those documents, though this is not an area in which I spent a ton of time.
That's great, but we have an administration that renamed the DoD the Department of War, and judging by their actions, they are not big on looking at paperwork.
Yeah, I'm not here to, I mean, the Trump administration needs to speak for themselves.
It is worth noting they have not yet repealed any of the documents that I just mentioned.
They have repealed other parts of our AI work, you know, both executive orders and the like. They have not yet repealed the National Security Memorandum, 3000.09, or the political declaration.
Now, you may say, well, they haven't repealed it because they don't follow it.
I genuinely don't know.
But they have not come for this set of documents the way they have for others.
I hope that they know the documents even exist.
They know they exist.
They know they exist?
They do know they exist.
Okay.
Yeah, yeah, yeah.
Because I'm sure they're going to rewrite them at some point.
I'm sure it's coming.
I'm sure it's coming. What it looks like, I don't know. But it's worth doing, like, I want them to be coming up with this. Because there are a lot of career people in the Department of Defense. I know a lot have been fired, a lot have left, but there are a lot of people who say, all we want here is just some sense of where the box is. You tell us what the lines are, we'll play within the lines. But where we're reluctant to do anything is when we don't have certainty on what the political leadership wants of where the lines should be. And that's what I heard in '23 and '24 and all along the way. Just give us the clear guidance. And I really respected that. And renaming aside, firings aside, departures aside, there remain a lot of very talented people in the military and on the civilian side of the Department of Defense who I think fundamentally still have that view and will help shape where this goes.
So give me an idea of what the box was like that y'all crafted.
Well, the lethal autonomous weapons box, the 3000.09, predated us. DoD had done that before. But my understanding of that was that it was pegged to what was called the appropriate levels of human judgment.
And that was kind of on a system by system basis, how much human judgment is required.
It is worth noting that lethal autonomous weapons can feel new and different, but we have had them in some form for a very long time.
And the canonical example is air defense, where you're going to have an interceptor fired at a missile coming in.
The decision to fire that is probably happening in a very small number of seconds.
So the Patriot missile system, the Aegis missile system, my understanding is these anti-air systems have had autonomous or semi-autonomous modes since something like the 1980s or 1990s.
And there's a lot of procedures that have, you know, built up over decades of when do we put
these systems into those modes, how do they perform, all that kind of stuff.
We did not come in and redefine any of that.
That is long baked.
And from what I know, which was not a ton, the White House is pretty far away from, like, military, you know, tactical procedure. But from what I know, I was comfortable with everything I saw there.
I think the classic example of this would be, and I'm going to screw up which ships specifically,
but there are American ships that have guns on them that if something comes at the ship very fast,
the gun is going to automatically track it and destroy it.
I think it's called the Phalanx system, but I could be wrong.
But yeah, that is a canonical example.
I believe...
And that's been around for like 20, 30 years.
Yeah, and I believe an American ship used that system recently, maybe a couple of years ago.
in some of the Yemen operations where there was a missile coming in.
And I think all of us would kind of reflexively sit here and be like, yes, that's okay.
We don't want Americans dying on a destroyer out there.
So that's the kind of view of the stuff where I think there's a really broad range of what counts as autonomous and the like.
Yeah, I mean, I am worried, broadly, about new startups that don't have that kind of ingrained culture in them, which this administration is rushing to embrace. That move-fast-and-break-things culture meeting, um, lethality, and unproven and untested systems being deployed in war. And I don't know, I think we may have some unpleasant consequences down the line, when an Anduril or a Palantir or one of these other companies gets the big contract and deploys the big weapon that runs autonomously or semi-autonomously. So, you know, time will tell.
I may be less pegged to, like, which company develops it, because I think startups bring
a lot of advantages and so forth.
And it's not like everything that comes from a big company, especially a big defense
contractor, is great.
But I hear you that there's a lot of real complexity here.
And, you know, we did our best, in an earlier technological moment when the technology was not really mature enough to be used in this way, to set out guardrails. And now it's the Trump administration's turn, and we'll just see what they do with it.
And it's hard for me to know where things are right now.
All right, let's go back to, can I ask you a question about the NNSA project, as much as you know?
Yeah, you can.
What were you thinking on timing for this?
I was supposed to have a 12 o'clock, like, a thing.
I can try to push it back a little bit if you...
Oh, yeah, we're running up on time.
Let me ask you...
What, all right, can I get two more?
Two more questions?
Go for it. Yeah.
All right.
One, the NNSA question.
What are the nuclear concerns there?
Like, what are they red teaming for?
What are they worried that an LLM will give someone?
I think there's an interesting open question around to what degree LLMs can either extract or, maybe at some point, discover knowledge that is very sensitive nuclear knowledge.
And I think that's what they were probing at.
And I think that's a, you know, do I think LLMs are going to enable atomic war, you know, atomic warfare tomorrow? No, no, I don't. In fact, no.
But I think this was one project amongst a number where we were trying to say, of the things the government knows, of the particular expertise the government has, how are LLMs doing? What kind of knowledge can the government bring to the emerging science of LLM evaluation? So NNSA was one. I wouldn't, and I'm not suggesting you're doing this, but just in general, I wouldn't over-rotate on the NNSA project. I think a lot of really meaningful work in this space happened at places like the AI Safety Institute, which was a broader base of things, where maybe the concerns are a little bit nearer-term: biology, cyber, things like that.
All right, let's go back to my pin and take it out.
Societal concerns.
Can you talk a little bit more about the slop that's flooding our feeds,
the short-form videos, you know, of Shrimp Jesus. And, you know, something that I've been concerned about in the last, like, six months, I saw it in the Cuomo campaign. Somebody's running ads against Ossoff that are AI-generated videos of him saying things that he hasn't said.
how do you feel about all this?
What are we doing here?
I have concerns about this.
I have concerns about the societal impacts of AI.
I don't know how much of this is a government thing to address, to be clear.
I think it is very clear to me that there are and should be limits on the government reach on freedom of speech and the free markets and the like.
And it's not that I would be against all AI regulation, but I would be hesitant before we developed a set of AI regulations that were, you know, saying what kind of videos you could make or whatnot.
So there may be edge cases where we would want something, but I think this is a case where there could well be very negative societal effects from AI that pose real challenges to human development, human flourishing, and the like.
And I think it is incumbent on the companies and is incumbent on all of us to figure out how do we get the most of this technology without going down, you know, some very slippery slopes of disempowerment and disengagement and, you know, really negative consequences for our society.
But this is, again, far from my area of actual expertise as an AI policymaker.
Ben, thank you so much for coming on to Angry Planet and walking us through this.
Can I call on you in the future if I'm working on an AI story?
Thanks much for having me.
Be happy to talk to you in the future.
I appreciate the depth of your questions here.
This is a good conversation.
And you put your finger on a lot of subjects where even the people who are working on it should be honest and say,
we do not have answers and we're working through it.
Excellent.
Thank you so much.
Talk to you soon.
That's all for this week, Angry Planet listeners. As always, Angry Planet is me, Matthew Galt, Jason Fields, and Kevin O'Dell.
If you like the show, please go to angryplanetpod.com and kick us $9 a month.
You get early commercial-free versions of the mainline episodes and all the written work.
We will be back again soon with another conversation about conflict on an angry planet.
Stay safe until then.
