Limitless Podcast - Operation Epic Fury: The USA Banned Anthropic But Used It Anyways
Episode Date: March 2, 2026

We explore a tumultuous week in AI as the U.S. government banned Anthropic's Claude AI from military use, only for it to be deployed in the Iranian operation the next day. We analyze the ethical dilemmas faced by AI firms navigating government demands, spotlighting CEO Dario Amodei's refusal to compromise on safety. The discussion intensifies with OpenAI's bold offer to the Pentagon, igniting a rivalry that questions corporate power in military engagements.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast
------
TIMESTAMPS
0:09 AI Used as a Weapon
1:19 The Pentagon's Ultimatum
4:45 Dario's Ethical Stand
10:51 OpenAI's Strategic Shift
14:25 Irony of Military Operations
18:00 Public and Private Divide
19:26 The Future of AI and Warfare
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
On Friday, the USA banned Anthropic from being used in any military operation after Dario refused to cater to their demands of being used for mass surveillance and autonomous weapons.
Then, literally hours later, in the early morning of Saturday, Claude was used to perform and execute the biggest military operation since the invasion of Iraq.
This has by far been the most insane week in AI.
There was drama and deceit between the top two AI labs, OpenAI and Anthropic, and the Pentagon wanted uncensored access to Claude and OpenAI's ChatGPT. AI models are not just chatbots at this point. They're geopolitical weapons being used for warfare. It's amazing how the biggest news on Earth is now just AI news. Not a single large thing happens that doesn't have AI integral to the decision-making process.
And this was no different. And I think the question that it left everyone with at the end of this is
who really controls AI? Because for the first time what we're seeing is these private companies have
so much leverage, so much power, that they're starting to conflict with the actual
elected officials and government. And I think that's kind of at the core of this discussion. But if you
missed anything over the last 72 hours, don't worry. We're going to get you caught up, starting with
what happened early last week that sparked this debate. Because at this time, we didn't know that there were any war plans happening. No one had any idea that there were any attacks planned. It was just
an AI story. So maybe we'll start with that AI story. That AI story specifically was the news that
revealed that the Pentagon, which is part of the U.S. Department of War, had been using Claude
to orchestrate and execute their capture of the president or former president of Venezuela,
Maduro. And that shocked everyone because up until that point, people were just kind of prompting
it to vibe code stuff and to answer their silly questions about, you know, what they wanted to cook
tonight. So to see this real-life example of an AI being used not just as a chatbot tool, but for something as important as military warfare, was a big shock and surprise,
which then sparked a debate around what the model should be used for. Now, the head of the
Pentagon, Pete Hegseth, issued an ultimatum shortly after, which raised suspicions around what the conversations were like between the U.S. Department of War and the owner of Claude, Anthropic, and it was anything but good. The issue they were facing was that Anthropic had been asked to give them an uncensored version of Claude, which could be used for two things: mass surveillance, which included domestic mass surveillance of people within the US, which would be a breach of the Fourth Amendment, and also for use within autonomous weapons, meaning that there would be no humans involved and an AI would control how weapons were aimed and fired. Dario's comment against that was simply that he did not feel comfortable giving Claude that access. He didn't think it was good enough, and also that it was a direct breach of law.
So Pete Hegseth issued them an ultimatum with a deadline a few days later, on the Friday,
saying you either agree to our demands or there are consequences.
Yeah, and it's really interesting to hear how close they were,
but seemingly unable to reach a deal.
It seemed like they had everything down to just what, two of these red lines, right?
And a lot of that conversation happened around whether they're allowed,
whether the U.S. government and the Department of War are able to use these models without the express written consent and approval of Anthropic when it comes to, I guess, making kinetic decisions, things that actually result in harm being caused. And there's this interesting interview that I saw, or kind of report, that said that when asked about, what was it, the nuclear weapon? Like, if someone shot a nuclear weapon at the United States, does the Department of War have, oh, here it is. Yeah, this is
perfect. Does the Department of War have the opportunity and have the right to use Anthropic and
Claude models to determine what to do about that, to help shoot it down? And then the response from
Dario was basically like, well, call us first, and then we'll talk through it and we'll let you know.
And I can understand why Dario wants that to be the outcome. And I can understand why the Department
of War is absolutely furious because they're like, you are not the elected official, you are not
the military, you don't have the right to sign off on our nuclear plans. But for Dario,
he very much feels like he created this incredibly strong tool. And what is Anthropic known for at the core of its DNA? Well, it's safety. It's AI alignment. And I'm sure they want to feel like they have a heavy hand so it doesn't get out of hand. And I think that's ultimately where this conflict came from: Anthropic wanting to abide by its safety principles, but the Department of War and the government and the military really being like, okay, yeah, but we're the military. And if someone's attacking us, we need to use all the tools at our disposal. And we can't be waiting for you to answer the phone to tell us if it's okay or
not. Yeah, the crux of the issue comes down to the contractual language. The Pentagon was willing to say, hey, yeah, you can keep us within all bounds of the law. And Dario's response was simply, the law isn't really prepped and covered for the future of AI. Like, right now, you could use our model legally to get access to a bunch of people's data. And you can just get away with that.
And Dario, given his own fundamental ethics behind building Anthropic, wasn't comfortable with that.
But the U.S. Department of War's response was simply, hey, this is a matter of national security,
and we can't have a private company, a private, unelected official dictate how we perform national
defense, which you can see fair takes on either side at this point, and it's extremely complicated
and nuanced. And in Dario's exact response, there's this very poignant line that he says,
we cannot in good conscience accede to their request. This was in response to Pete's ultimatum on
that Friday, which led to just a crazy public, I guess, debate or fight between
these guys.
You've got Pete publicly saying Anthropic's delivered a masterclass in arrogance and betrayal, as well as a textbook case of how not to do business with the United States government or the Pentagon.
And a bunch of responses were released after that showing that Dario had not been answering their phone calls or was just being inflexible. And then Dario, on his side, was saying, we need this contractual language involved, because otherwise this act could be used for nefarious purposes. So it's just so, so much drama.
In addition to the Secretary of War having some choice words for Anthropic, Donald Trump chimed in
with a rather angry and loud, all-caps message saying the United States of America will never allow a radical left-woke company to dictate how our great military fights and wins wars, among other things. And the public backlash, the public sentiment around Anthropic and Trump, kind of shifted at this moment to being supportive of Anthropic. People were glad that it was standing on its morals and its values. And as a result, the App Store showed that Claude actually became number one in the world. A few weeks ago, it was only number 131. And this part of the show is brought to you
by our sponsor and a supporter of the show, Polymarket. And Polymarket is a great way to determine things
like who is going to be the number one app in the app store on March 6. And what's interesting here is
there's a 62% chance that the current leader actually changes hands. It's showing that ChatGPT is going to be the new
king on the block. When in reality, there's another market that shows that Anthropic is actually most likely to have the best model by the end of March, which is loosely the same exact time.
And I love how they've used this to kind of gauge what's best, because now we kind of have an idea that GPT-6 isn't coming out this month.
But we know for a fact that Anthropic has the winner with Opus 4.6.
I was just looking and wondering why ChatGPT might be taking the lead here,
despite there being so much positive approval for Claude.
And that might have something to do with our friend Sam Altman at OpenAI,
who swooped in at the last minute
after all the drama between Dario
and the US Department of War
with his own proposition,
basically saying,
hey, you can use ChatGPT instead
and we'll agree to your terms.
As long as you want to keep things
within lawful use,
we're going to draft up our own safety stack
and red lines.
What do you think about this?
And the agreement was pretty extensive.
They put out an open statement.
Now, there's a lot of minutiae in the details,
but the way I see it,
or my favorite highlights from this,
is they pretty much agreed to the simple terms,
but there were some slight changes, in the form of:
they agreed that OpenAI won't be used for mass domestic surveillance,
and no use of OpenAI technology to direct autonomous weapon systems.
So these are the two things that Dario wanted,
but it's all under lawful use,
which is the issue that Dario had.
And then there's a third thing,
which is no use of OpenAI technology for high-stakes automated decisions,
aka there should always be a human in the loop,
conceivably held accountable in any court of law
going forward. This is crazy. This is the part of the story in which I just kind of lost my mind because
it didn't make any sense. It was like, okay, the Pentagon is saying no to Anthropic, Anthropic is saying no to the Pentagon. Clearly they can't figure it out. Sam Altman was on CNBC earlier in the day supporting Anthropic. And then that evening, they signed the deal with the Department of War with supposedly the same exact terms, because they didn't want a red line. And this was like, oh my God, what do you mean? Was Sam just manipulating the world so that he could slide in and actually steal the deal from Anthropic? And in a way, he did. It appeared as if the Department of War was trying to call Dario at the 5:01 deadline. He didn't answer the phone. They gave him a couple minutes. They picked up the phone, called Sam, and now there's a deal. And to your point, it seems like
there is this key difference. And while a lot of the morals that they were standing on are the
same, the key difference is basically in the responsibility and the lawfulness. Like, one is kind of
proactive, one is retroactive, where Anthropic wanted the ability to sign off on things, whereas OpenAI
is saying, well, you are the government, you are the military, you can make these decisions so long as
they are lawful and so long as someone is responsible for like kind of claiming responsibility for
these decisions. And we kind of know how that works where, I mean, perhaps that is not as foolproof
as Anthropic's plan, but it resulted in them getting a, what was it, the $200 million deal and a lot of publicity with the government. So it was a big win for OpenAI, and the point of the story where I was like, what is going on here? This is chaos. And mind you, this is just
hours before the actual first strikes were about to start. So there was a lot of things happening
in anticipation of this mission. Yeah, all of this happened within like, I can't emphasize this
enough. It happened within like four to six hours. All of this happened. I was sitting on X scrolling
and I was like, I was sharing something and I was like, oh my God, wait, like a new thing happened.
Then a new thing happened. Friday night was not a night to go out because the internet was at its peak.
The truth was being revealed. I think X had their highest amount of engagement over the weekend.
And Saturday and Sunday both broke new records.
It's just insane.
But back to the Sam agreement: they agreed to all lawful use.
And the explicit difference there is that they'll settle all kinds of grievances in a court of law.
So retroactive, as you just said, which is the thing that Dario was completely against.
But there's also some other important safety lines that they put in that I actually think are useful towards addressing this.
So one, the models, or ChatGPT, can only be deployed through the cloud.
And the reason why this is a better implementation
versus letting the government run it locally
is that you can monitor and you can track
what they're doing to make sure that they're not doing anything nefarious.
Number two, OpenAI has a specific vetted team
of American software engineers
that will always work on these models
and will update them.
And the best part is the government is hands off
on this entire approach.
And then the third important point is Dario's agreement: his problem was around usage policies. So he basically wanted to dictate when the Pentagon could or could not perform, let's say, a military strike. Whereas in OpenAI's deal, they use usage policies and a software stack that kind of helps them navigate through all of these different legal issues. So it's just a much more detailed and nuanced plan. A lot of people are kind of against OpenAI for this,
but this might be a hot take. I actually think it's a very proactive way to kind of deal with
this situation for what we have right now. I think legislation will change eventually going
forward. I don't think it's perfect. I don't think it's ready for AI-enabled warfare,
but I think it's a good step in the right direction ultimately. And there was this really
awesome comment from OpenAI's head of national security, Katrina, who kind of explains these nuances, saying that the safety stack and usage policies that they've set up here are going to be more reliable. They called out Anthropic, basically saying that its approach wasn't well thought out and ours is way, way better. The other final cool part about this agreement is that
OpenAI explicitly states to the Pentagon that they should offer these terms to every single AI model lab. So they're not trying to secure an exclusive deal. This could be for anyone, and OpenAI is just the first. Can we take a moment to just appreciate the fact that OpenAI, the AI company, has a head of national security partnerships? I think this gets to the
core of the message of this episode is, and the message of this entire narrative this weekend is
who is really in control of this? And when I say in control, I mean in control of everything.
Who has the leverage to make the decisions at the end of the day?
And it seemed like, I mean, prior to OpenAI signing this deal,
it seemed like they were forming this kind of force against the government, right?
This oppositional force where Anthropic was like,
we need this to be safe.
OpenAI and Sam Altman went on TV and agreed.
Google and a lot of employees from that company and DeepMind were kind of on board.
They were saying, we're going to draw these hard lines too.
We're not working with you.
And it created this interesting power dynamic where they actually did have enough leverage to
inflict damage on, I guess, matters of national security, on the military and limit their ability
to use these prime tools. And it gets into this interesting debate of who should be responsible for these
decisions. I mean, a lot of people will say, the military, they've been elected, they are the officials,
they understand, they're held responsible for keeping us safe and protected, and they deserve the best
tools. And the OpenAIs and the Anthropics and the AI companies, they'll say, but you don't understand how these tools work. You don't know how capable they are. You don't understand the nuances within them. And we have spent our whole lives trying to design these safely.
Therefore, you should trust us to make this decision.
And I think it's step one and it's like event number one in a probably longstanding
kind of argument that could happen, which is who actually holds the leverage over who
and is there a willingness to work together or is this going to be this divisive thing
where there's a band of private companies and there's a band of public entities and they are
clashing because they have the same goals, but they are at odds with how they get accomplished.
And I think this was just an interesting moment of time to kind of reflect on that part in particular.
Well, we didn't even mention the craziest part about all of this, which actually answers your question, which is no one knows.
For the actual military operation, Epic Fury that was performed over the weekend, it was enabled by Claude after being blacklisted and after being banned completely.
That's so ironic, huh?
Yeah, it's ironic.
So, you know, you had the Pentagon creating this entire fuss, Dario saying, okay, cool, we'll give you the means to transition to another model. They signed a new deal, a $200 million deal, with OpenAI. And then they ended up using the model which had been explicitly banned by the president himself. So it goes to show that there's a lot
of nuance with this. I think Claude had been used for well over six months within the Pentagon by now. So it's trained on all of its data. It's being used by all the employees. It's something that they have there. And technically, they do have another six months to transition to another model.
So it makes sense that they were still using Claude. And it's obvious that Claude is the current and
preferred choice right now. And that'll probably change over the next couple of months. But yeah,
it's a very unsubstantiated or undefined vector moving forwards. I think the US has a lot of angles here, meaning they want to upgrade their military offense, but they're also cautious and curious about the rising and looming threat from China potentially taking over Taiwan, and a bunch of other things. So they just want to get ahead of these things. And if they can leverage top American AI model labs to work with them specifically, that'll be the advantage they want. Yeah, and it seemed like it was used
in terms of like the actual implementation for three things. It was, um, for intelligence assessment,
for target identification and for simulating battle scenarios. So the AI isn't directly guiding missiles.
It's not doing anything kinetic. It is mostly just for informational purposes. But yeah,
and I think that's where a lot of this discourse comes from. Now, Sam had a really interesting
AMA where he was kind of answering questions too, right? Yep. About kind of the public sentiment
addressing them, doing it live in real time, answering people's questions the night of.
He goes, I'd like to answer questions about our work with the Department of War and our thinking
over the past few days. Please ask me anything. And his three takeaways were super interesting. Number one, he was surprised at how 50-50 the debate was over whether warfare in America or national security should be the judgment of elected officials or unelected private companies.
It seemed like a lot of people were like, yeah, Anthropic maybe should have some more involvement here in setting the guidelines for how we use AI within warfare.
And then a bunch of other people saying, no, we elected officials specifically for this.
They should be the ones doing this.
The second biggest takeaway is there's a question around whether companies like OpenAI eventually become nationalized by the government, because their technology is so important and crucial to things like defense and the economy.
And he goes on to say this was really revealing.
He says, I've thought about nationalization, of course, and for a long time, it seems like it might be better that building AGI was a government project, which kind of shocked me there because I understand the existential crisis here, but, you know, that was super cool.
And then the third thing that he states is that people take their safety for granted, basically saying that people don't really realize the lengths and extents that the Department of War and Defense need to go to to protect them, and this just gets lost in public discussion and nuance.
Yeah, the government-backed project is super interesting, because in the past, when we have done things like this, the Manhattan Project, companies like Lockheed Martin that had a lot of government support, they've worked very well, because it allows you to kind of converge resources and talent into a single motive, and you get the legislative protection to build as fast as possible. The issue now is there's just this lack of efficiency and capability within those same entities that did this in the past. And the market forces will not allow it: with the amount of capital needed to build these gigantic AI data centers, you can't extract that from taxes. You can't, like, validate it by printing
more dollars. You actually just have to make revenue and do this in private markets. And I think
that's the slightly uncomfortable truth is that it's just too expensive and too challenging to do this
any other way. So there has to be this divide between private and public sectors, because it's the
only way that you can kind of garner resources this effectively to actually deploy them at the
scale required to build AGI in the first place. Yeah. And there was this other take, which I thought
was super interesting. Rune asked, are you worried at all about the potential for things to go
really south during a possible dispute over what's legal or not later, or be deemed a supply chain
risk? Sam Altman responds, yes, I am. And if we have to take on that fight, we will. But it clearly
exposes us to some risk. I'm still very hopeful this is going to get resolved. And part of why we
wanted to act fast was to help increase the chances of that. So again, reemphasizing the point we
made earlier, he's taking the approach of take action now and we'll figure it out later,
as long as there are certain stipulations from the government saying they'll do it within lawful use and that there will be a human in the loop, that you won't have AIs autonomously firing weapons at random people, because the models just aren't good enough there yet. But it's relatively
uncertain. This is very uncharted territory. We don't know where this is going to end up.
And to be honest, it's going to be a very significant debate probably for the next couple of years.
I don't think this is going to be a one-off event.
It is certainly the craziest 48 hours that we've had in 2026 so far,
but it is by no means at its end yet.
Yeah, it's been absolutely insane.
And now you are mostly caught up on everything that happened this weekend.
It was nuts.
And I mean, to your point, Ejaaz, I think it's not a conversation that's going to end here. I mean, just in the last, what, two months, we've gone to Venezuela and now Iran, and there's clearly more intent to apply this to the real world. And as these models get more capable, as they're able to actually do more things, these debates are just going to keep heating up.
But this one was crazy.
I mean, I haven't been glued to my phone like this in a long time.
And the plot twist, like, this is better than any sort of drama TV show, right?
We watched a deal fall apart.
The same person who was backing that deal swooped in and stole it.
And then within hours, the blacklisted AI was used to actually attack another country,
even though a new deal had been signed, because they still have six months left on the contract.
And now Dario and the Anthropic team are upset.
And the public kind of supported them.
So it went to number one in the App Store, and it's just like, my God, there's so much.
Can we appreciate how quickly all of this happened as well?
Like, oh man, yeah.
Like, we got to shout out to X for this because, geez, like the information was flowing.
The information was flowing and it came in in real time.
Like, I felt the hours.
Like, I woke up, I think it was Saturday morning.
I went to bed early, like maybe a normal or lame person.
And I woke up and I saw, I think, a tweet from maybe you, Josh, that was like giving me the breakdown of everything that was going on.
And I was like, how did I miss this?
This was like an hour after I went to bed.
And news was breaking every hour.
It was crazy.
It was absolutely insane.
And it just goes to show that the speed at which AI is accelerating, not just in chatbots, not just in video creation, but in major important things like national defense and security, should not be understated, and it should be a topic of focus for probably a lot of other sectors going forward.
I don't know if we're at this point where we want to get into homework for the listeners here.
but I really want to hear from you
what your thoughts are on this entire debate.
Do you think the Pentagon was in the right?
Do you think Dario was in the right?
Do you think OpenAI and Sam actually struck the right resolution?
Or do you think it's all rubbish
and that we need to completely dismantle everything
and rebuild from the ground up?
Let us know your thoughts in the comments
or even in DMs to us.
Like I really want to hear your feedback.
Yeah.
And if you want to follow the conversation,
we've been monitoring the situation,
we've been publishing on the situation.
Follow both of us on Twitter; our handles are both linked in the description below. We've been on it. I think between us,
we've gotten like 20, 30, 40 million impressions this week, and it's been crazy. So that is always
where you can see the news first, before we get on camera, but we will try to keep you updated. If you
have watched this, congratulations, you're now up to date for now. We'll see where things go
throughout the rest of this week, but we have a lot more planned. There's a lot of exciting topics
to cover, and we'll be here with you to cover it all. So thank you, as always, for watching.
I very much appreciate it. Thank you for sharing with your friends, which goes a long way,
for subscribing to our substack, which has been doing very, very well.
There's like 60, 70,000 people that read every single one.
So if you want to get in on the know,
click the links down in the description, share it with your friends.
And as always, thank you so much for watching.
We will see you guys in the next one.
