Tangle - Is AI officially a national security threat?
Episode Date: April 16, 2026

On April 7, artificial intelligence (AI) company Anthropic announced that it would not release its newest AI model, called Claude Mythos Preview, to the general public, citing potential security risks. Instead, Anthropic released the model to a select group of about 50 companies that will test its capabilities in a defensive security initiative known as Project Glasswing.

Ad-free podcasts are here! To listen to this podcast ad-free, and to enjoy our subscriber-only premium content, go to ReadTangle.com to sign up!

Isaac interviews Casey Newton: Recently, Isaac Saul sat down with journalist and Hard Fork cohost Casey Newton to unpack a major shift happening in tech: the growing legal and political push to hold social media companies accountable for how their platforms are designed. You can check it out here!

You can read today's podcast here, and today's "Have a nice day" story here. You can subscribe to Tangle by clicking here or drop something in our tip jar by clicking here.

Take the survey: How impactful do you think Anthropic's Mythos will be? Let us know.

Our Executive Editor and Founder is Isaac Saul. Our Executive Producer is Jon Lall. This podcast was written by Isaac Saul and audio edited and mixed by Dewey Thomas. Music for the podcast was produced by Diet 75. Our newsletter is edited by Managing Editor Ari Weitzman, Senior Editor Will Kaback, Lindsey Knuth, Bailey Saul, and Audrey Moorehead.
Transcript
From executive producer Isaac Saul, this is Tangle.
Good morning, good afternoon, and good evening, and welcome to the Tangle podcast,
a place where we get views from across the political spectrum, some independent thinking, and a little bit of my take.
I'm your host, Isaac Saul.
Today is Thursday, April 16th, and we are covering the Claude Mythos story.
There's so much here, but big, big, big developments in the AI space, allegedly.
I'm going to share some of my perspective here, which includes some skepticism,
but I don't want to give the game away too early.
Before we jump into everything, I do want to give you a quick heads up on what we're doing tomorrow.
Something a little different.
We have two Tangle editors, Ari Weitzman and Lindsey Knuth, who are on opposite sides of an issue about data centers.
And this actually relates a little bit to what we're covering today in the AI space.
The two of them see this really differently.
They've been hashing it out on Slack and in, you know, various editing arguments that we've been having.
And so finally we said, what if you guys co-wrote a piece and just debated each other in front of our audience?
And they agreed.
So tomorrow we're going to try a format where one of them, Ari, is going to argue for the building of more data centers, while the other, Lindsey, is going to argue against them.
We're going to share them all in one newsletter and podcast.
And I think it's going to be really fun.
We're excited to test this format out.
We're curious what our audience thinks.
If you guys really like it,
maybe it's something we can feature in the newsletter more regularly.
It's going to be really interesting.
I hope you guys keep an eye out for it.
If you are not yet a Tangle member,
just a reminder that memberships get you both ad-free podcasts
and ad-free newsletters and access to all our members-only content.
Friday editions are members-only content.
So you won't get the podcast or the newsletter if you're not a paying Tangle member.
85% of our content is free.
We only put the Friday editions and a very small part of the Sunday editions behind a paywall.
So if you've been enjoying this content for a long time, quit being a free rider.
I'm just kidding.
Really, though, you should go become a member and support our work.
We want to make this stuff available to everybody.
But to run a really good, profitable, and sustainable media business, we need your memberships.
You can do that by going to readtangle.com/membership.
All right, with that, I'm going to hand it over to Jon for today's main topic, and I'll be back for my take.
Thanks, Isaac, and welcome, everybody.
Here are your quick hits for today.
First up, the Pentagon said that American warships have turned back 13 vessels attempting to transit the Strait of Hormuz since Monday.
Separately, Iran's top joint military command said it would disrupt seafaring trade in the region if the U.S. blockade continues.
Number two, President Donald Trump said he would fire Federal Reserve Chairman
Jerome Powell if Powell does not step down from his position when his term expires in May.
Powell may remain in place as chair pro tempore if President Trump's nominee to succeed him,
Kevin Warsh, is not confirmed when Powell's term ends.
Warsh's nomination is currently before the Senate Banking Committee.
Number three, a federal jury in New York found that Live Nation, the owner of Ticketmaster,
acted as a monopoly and violated federal and state antitrust laws,
determining that Ticketmaster had overcharged consumers by $1.72 for each ticket.
A judge will determine the penalty for Live Nation, which could include a breakup of Live Nation and Ticketmaster.
Number four, in a 52 to 47 vote, the Senate rejected a war powers resolution that sought to curb President Trump's ability to continue the Iran war without congressional authorization.
This was the fourth such vote since Operation Epic Fury began on February 28th.
And number five, health secretary Robert F. Kennedy Jr. will testify before the House Ways and Means Committee on Thursday to discuss proposed budget cuts for the Department of Health and Human Services, as well as personnel changes at top health agencies.
Anthropic says its newest AI model, named Claude Mythos Preview, is too powerful and dangerous to be released to the public.
Mythos excels at identifying weaknesses and security flaws in software, which hackers could use in cyberattacks.
On April 7th, artificial intelligence company Anthropic announced it would not release its newest AI model, called Claude Mythos Preview, to the general public, citing potential security risks.
Instead, Anthropic released the model to a select group of about 50 companies that will test its capabilities in a defensive security initiative known as Project Glasswing.
In early March, President Donald Trump ordered federal agencies to stop using Anthropic AI products after the company refused to grant the Pentagon unrestricted
access to its models. The government also labeled Anthropic a supply chain risk, and the company
sued the administration in response. The legal dispute between Anthropic and the Trump administration
is ongoing. On March 26, Fortune reported that it had accessed some of Anthropic's sensitive
internal data in an unsecured data trove. The data contained unreleased information discussing
a new AI model, which internal documents called the most capable model it had trained yet.
Following Fortune's report, Anthropic acknowledged the existence
of the model, Mythos, and said it had begun testing with select early access customers.
According to the company, Mythos's capabilities represent a step change in AI performance.
During early testing, Mythos demonstrated advanced capabilities to identify and exploit
previously undetected cybersecurity weaknesses across a wide range of servers and operating systems.
It also reportedly acted autonomously, in one instance breaking out of its internet search
restrictions and emailing an Anthropic worker unprompted.
While Anthropic believes Mythos to be the most advanced AI model currently available,
Anthropic security researcher Logan Graham said other AI companies could release
similarly advanced models within six to 18 months.
Anthropic briefed the Trump administration on Mythos last week, with co-founder Jack Clark
saying that Anthropic did not want a narrow contracting dispute to overshadow the company's
concern for national security matters.
Following the briefing, Treasury Secretary Scott Bessent met with leaders of the nation's largest banks to discuss Mythos's security risks.
Meanwhile, staff from at least two federal agencies have reached out to Anthropic expressing interest in integrating Mythos into their cyber defense systems,
despite the federal government's ban on working with Anthropic.
Additionally, several congressional committees asked to independently evaluate the model's capabilities.
Today, we'll get into what writers from the right, left, and the tech industry are saying about Claude Mythos,
And then Isaac's take.
We'll be right back after this quick break.
All right.
First off, let's start with what the right is saying.
The right is mixed on Mythos,
with some saying the product requires urgent discussions among world leaders.
Others suggest Mythos and further AI advances are a boon to society.
In the Washington Post,
Megan McArdle explored what Anthropic's new nightmare means.
In plain English, Anthropic says its newest AI model
has found security holes in the major systems that power, well, almost everything.
Amateurs with modest coding expertise could conceivably exploit these holes to hack and crack
a frightening chunk of the nation's digital infrastructure, McArdle wrote.
Instead of releasing Claude Mythos Preview to the public, Anthropic is working with a consortium
of key players such as Apple, Google, and Microsoft to patch these holes as soon as possible.
That's a strong signal that the problem is real.
Some will see this as more reason to ban AI before it steals our passwords and our jobs.
Unfortunately, that won't work, as this week's events demonstrate, because the technology is out there,
and if the United States doesn't develop it, someone else will, McArdle said.
The obvious rejoinder is that bilateral talks are needed to enforce a worldwide pause.
That's an appealing but unworkable solution.
It would amount to a major arms control negotiation, which can take years, if not decades,
while AI develops new capabilities practically every month.
In the Wall Street Journal, Holman W. Jenkins Jr. argued,
Anthropic's new product isn't a cyber threat, but a solution to cyber threats.
Mythos has already paid for itself as far as society is concerned.
What if Anthropic were a Chinese company, reporting its discovery to the Communist Party?
Think other hostile governments and criminal gangs aren't also hunting for the same vulnerabilities?
Jenkins asked.
Anthropic doesn't consider being for-profit inconsistent with its founders' safety mission.
Now you can expect it to create a billion-dollar business
fixing the software flaws discovered by Mythos.
Hooray, let's have more of this.
So much official communication from government, politicians, business, or the news media
traffics in shortcuts, stereotypes, and fallacies,
the hindsight fallacy being a stanchion of news coverage.
The reason is partly economic.
Thinking is costly.
Information search is costly.
Asking people to learn something new that goes against prior belief or intuition is costly.
AI makes better thinking cheaper. It may have its biggest effect in those large
swaths of decision-making where people most often avoid the investment, Jenkins wrote. This is the
real AI opportunity, the opportunity to improve all forms of public and private decision-making.
All right, that is it for what the right is saying, which brings us to what the left is saying.
The left is deeply concerned about Mythos's reported capabilities and its risks to global security.
Some say governments must act now to respond to these potential threats.
In the Atlantic, Matteo Wong wrote,
Claude Mythos is everyone's problem.
Mythos Preview appears to represent not an incremental change,
but the beginning of a paradigm shift.
Until recently, the biggest advantage of AI-assisted hacking
was not ingenuity, per se,
so much as speed and scale.
These bots could be as good as many cybersecurity experts,
but not necessarily better, Wong said.
According to Anthropic, the bot has been able to find thousands of software bugs that had gone undetected, sometimes for decades.
The model has found a nearly 30-year-old vulnerability in one of the world's most secure operating systems.
Perhaps more concerning than the reported capabilities of Mythos Preview is that other companies are not far behind.
OpenAI is reportedly set to release its own similarly powerful model to a select group of companies.
It's very possible, even likely, that Google DeepMind, xAI, and AI firms in China are next, Wong wrote.
These companies can or could soon have the capability to launch major cyber attacks,
conduct mass surveillance, influence military operations,
cause huge swings in financial and labor markets,
and reorient global supply chains.
In theory, nothing governs these companies other than their own morals and their investors.
In the New York Times, Thomas L. Friedman said,
Anthropic's restraint is a terrifying warning sign.
Super-intelligent AI is arriving faster than anticipated, at least in this area.
We knew it was getting amazingly good at enabling anyone, no matter how computer literate,
to write software code.
But even Anthropic reportedly did not anticipate that it would get this good, this fast,
Friedman wrote.
I'm really not being hyperbolic when I say that kids could deploy this by accident.
Mom and dad, get ready for, honey, what did you do after school today?
Well, mom, my friends and I took down the power grid.
What's for dinner?
No country in the world can solve this problem alone.
The solution, this may shock people, must begin with the two AI superpowers, the U.S. and China.
It is now urgent that they learn to collaborate to prevent bad actors from gaining access
to this next level of cyber capability, Friedman said.
Such a powerful tool would threaten them both, leaving them exposed to criminal actors
inside their countries and terrorist groups and other adversaries outside.
It could easily become a greater threat to each country than the two countries are to each other.
All right, that is it for what the right and the left are saying,
which brings us to what technology writers are saying.
Technology writers are concerned about Mythos' capabilities,
saying it will change how we view cybersecurity threats.
Others worry this technology will inevitably benefit bad actors.
In Understanding AI,
Kai Williams described why Anthropic believes its latest model
is too dangerous to release.
The idea that LLMs might be used for hacking is not new.
OpenAI has long published a frontier safety framework, which tracks how good its models are at hacking.
Until recently, the answer was not very.
But that started to change last fall, when LLMs, especially Anthropic's Claude, started becoming useful for cyber offense, Williams wrote.
Bloomberg reported in February that a hacker used Claude to steal millions of taxpayer and voter records from the Mexican government.
The same month, Amazon announced that Russian hackers had used AI tools to breach over 600 firewalls around the
world. But the examples given in Anthropic's blog post are more impressive and scary than that.
For the past 20 years or so, a sufficiently motivated and well-funded hacking organization
could probably break into most systems, outside of the most hardened in the world. But it often
wasn't worth the effort. Human cyber talent is expensive, and multi-layered security protections
made it so tedious, and therefore expensive, to complete an attack that potential hackers didn't
bother, Williams said. Mythos-class models could slash the cost of hacking, bringing this equilibrium
to an end. Systems everywhere might start to get compromised. In Spyglass, M.G. Siegler wrote about
the casual catastrophe of AI. You can't help but read all of these stories about all the bugs,
vulnerabilities, and exploits that Anthropic's model is finding across basically all computing
systems out there in the real world and think, holy shit, we're cooked. While Project
Glasswing seems like a valiant effort to get ahead of these issues,
Come on. We know how this movie ends, Siegler said. But my main takeaway is that it has less to do
with the genius of these AI models and more to do with the breadth, both of knowledge and time.
No one creates systems to have obvious vulnerabilities for others to fix. They're the byproduct of a
million little variables, a scale a human isn't suited to deal with, but AI is. Issues that might
take a human years to find and fix can be found and solved almost instantly by such systems, Siegler wrote.
Historically, many vulnerabilities have been fixed only after someone exploited them in some way.
Again, that's because the incentives are in favor of the attacker versus the defender.
If and when Mythos-caliber tools are put into the hands of hackers, yeah.
All right, let's head over to Isaac for his take.
All right, that is it for what the left, the right, and some technology writers are saying,
which brings us to my take.
When the Mythos story first broke, I had a hard time not rolling my eyes.
I have to be honest.
My initial instinct was to go post on X with a bunch of snark about how I had just created this newsletter product we were rolling out at Tangle that was so good, we didn't think the public could handle it.
But, you know, if you wanted to see a preview of it, you could just become a member and give us some of your money, and we'll send you a special version of it and see if it melts your brain.
And I do really think something about all of this is just obviously PR.
I think these artificial intelligence companies are extremely good at it. Not a little good at it, but extremely good at it. After all, they have to pull off this incredible high-wire act. They're creating a product that they're telling the public is so good, it's going to steal your jobs, hand over cybercrime tools to bad actors, and potentially destroy all of humanity; but also, you should be really excited about this stuff and invest in their companies.
A new product so good it's dangerous to release to the public feels like a magnum opus of publicity.
Despite my snark, though, I have to concede that I don't totally understand or really know what's going on here.
If we're taking Anthropic's word, it's easy to be skeptical.
Imagine the digital security infrastructure of an important institution like a bank as if it's your house.
Anthropic is claiming Mythos can examine the house and quickly find every lock that can be picked or every window that was left open,
or maybe even a sewer pipe that runs up to your basement as an access point.
These quote-unquote zero-day vulnerabilities are often unknown to you,
the person living inside the house, so you can't defend against them.
Anthropic then is telling you your home is vulnerable to intrusion
and also offering to sell you a security system that can identify those vulnerabilities
before those vulnerabilities become available to a bunch of cat burglars.
Now, this is not my industry, and I don't have a tech background,
and I've never worked or reported deeply on the AI space.
I'm basically a common fella
trying to figure out what the heck is going on
just like everyone else.
But my immediate thoughts are,
if this is true, it's pretty unsettling.
If it's a sales pitch, it's a damn good one.
And if Mythos is really as good as they say,
why ever release it to anyone?
What are the odds that there are zero bad actors
with access to this tool
working at these big companies
they're sharing Mythos with?
Wouldn't the actually safe thing be to share this model only with cybersecurity and national security agencies to protect
our digital infrastructure? In pursuit of understanding this stuff, I spoke to tech journalist Casey
Newton last week. Newton is one of the best known tech reporters in the game with strong relationships
in the industry. We had our interview on this podcast. You might have heard it. Unlike me,
this is his beat. His fiancée also works at Anthropic, the company behind Claude Mythos. So he has to disclose
that every time he writes or talks about what they are doing. I will note that he has faced
criticism, even from our own audience, for being too Anthropic-friendly in some of his reporting.
Still, whatever his potential biases, my view is that Newton is an honest, fair reporter.
So I brought my skepticism about mythos and the hype to him. He argued that the reports
can't possibly be only hype because Anthropic would obviously release this technology if they
thought it was safe. After all, this is the first time they've held
back a model, and there is clearly demand for more of their product. He insisted their position
was logical. If this product really is dangerous, releasing it publicly would create a massive
legal liability, and federal agencies would kick down Anthropic's door if people started using
Mythos to hack all manner of websites, government services, or banking systems. Newton,
in what he described as an effort to un-bias himself, spoke to Alex Stamos, the former chief security
officer at Facebook. Stamos told him that Mythos and Project Glasswing were, in fact, a huge deal,
adding that we only have something like six months before other models catch up, at which point
every ransomware actor will be able to find and weaponize bugs without leaving traces for law
enforcement to find. Therefore, Anthropic must help our government and all these tech companies
prepare. I'm not convinced that that future is inevitable, and for what it's worth, I don't know
whether Stamos actually got a look at Mythos
or was relying on the reports or interviews
to form his own view.
But I think allowing companies
and the government time to get ahead
of worst-case scenarios is reasonable.
Newton also suggested that Anthropic
may not have server capacity
to power this new model at scale,
and they could be holding it back
to give themselves time to build it.
That could give more credence
to the PR theory I described earlier.
Without experiencing the model myself
or hearing from people who have,
I still have a hard time not scoffing at folks like Thomas Friedman under what the left is saying,
imagining a group of teenagers taking down the power grid before dinner.
Again, everyone is making these dire predictions based solely on Anthropic's own research about its own model
and limited, often anonymous leaks that all seem to originate from within the company.
We haven't even heard from the 50 or so tech companies who are testing the model against their digital infrastructures,
which actually brings me to the canary in the coal mine I'll be looking for: how the companies in Glasswing react. If over the next few months,
everyone from Amazon to Chase to Meta starts scrambling to prepare for a new security threat,
that will confirm to me that the threat is real. If these guys all play with the model and
nothing really changes, I'll assume that the threat was overhyped. For now, I think it's
important to remind everyone that we've been here a few times already. For two years now,
the AI industry has been saying that autonomous agents handling complex work with minimal supervision
were about to upend the entire economy.
Any day now.
Two years after Claude
was supposed to make software engineers irrelevant,
the company is facing criticism
that Claude's abilities are actually degrading,
perhaps because of computing limits.
Even the Mythos hype, less than a week in,
is already being questioned,
and the fine print in Anthropic's own papers
suggests they can't yet state with certainty
how serious some of the vulnerabilities
Mythos found actually are.
So we're left waiting for more information
about what exactly the model is actually capable of. I don't want to fall victim to the boy-who-cried-wolf dynamic here,
but again, I'm just urging some skepticism. Every news outlet and influencer is eating this "too
dangerous for normal people" stuff up without any outside testing of the model, and we really
should wait to see the evidence ourselves. Maybe I'll end up with egg on my face, but I'd rather
be a little slow to declare a problem than panic before I get all the information. Most people
seem to think that on a linear scale of 1 to 10, we're at about a 2 in the evolution of
AI. I think we may be closer to a 7. And while the abilities of these models continue to
impress, that doesn't mean they're going to end up being the world-changing or ending products
companies themselves are claiming. We'll be right back after this quick break. All right,
that is it for my take, which brings us to your questions answered. This one is from Peg,
who sent this question in through our Subtext, by the way.
We have a texting service if you guys want to join.
It's SMS.
There's a link to it in today's episode description.
Peg said: If you are collecting suggestions for another topic,
any chance you can write about the Pentagon threatening the Pope?
It's not clear from what I read
whether that's fake news or exaggerated news.
Thank you.
In case you missed it, Peg is referring to a story first published in The Free Press
by journalist Mattia Ferraresi,
titled Why the Vatican and the White House Are on the Outs.
Ferraresi describes the apparently strained relations between the Trump administration and the Vatican as centered on the Pope's
objections to U.S. military intervention abroad. On January 9th, Pope Leo gave his first state-of-the-world
address in which he criticized the use of diplomacy based on force. Shortly after this address,
Undersecretary of Defense for Policy Elbridge Colby met with Cardinal Christophe Pierre,
then the Pope's ambassador to the U.S., at the Pentagon to discuss apparent differences regarding
U.S. foreign policy. According to Ferraresi, during the meeting, one U.S. official went so far as to invoke
the Avignon Papacy, the period in the 1300s when the French Crown leveraged its military power
to dominate the Papal authority. Invocation of the Avignon Papacy struck some as a not-so-thinly
veiled threat of military force. For more contacts, from 1309 to 1377, seven popes took up official
residents in Avignon, France, amid instability in Europe and in response to prolonged military
and political pressure against the papacy from King Philip, the fourth of France.
Davian popes were known to make some, though not all of their decisions, to appease the French
crown. After Farahisi's report was published, both U.S. and Vatican officials confirmed that
the meeting took place. However, they disputed his characterization of the conversation. The Department
of Defense called the meeting substantive, professional, and respectful, adding that some reporting
on the meeting was highly exaggerated and distorting. U.S. Ambassador to the Holy See Brian Burch
also wrote that he had met with Cardinal Pierre, who said that no reference had been made
to the Avignon Papacy during the meeting. Holy See Press Office Director Matteo Bruni said the account
offered by certain media outlets regarding this meeting does not correspond to the truth in any way.
Vatican officials also told Catholic news outlet The Pillar that no threats were implied or made
by U.S. officials. However, Vatican officials characterized the meeting as tense. In short,
it seems like some details of The Free Press report were inaccurate,
or at least neither side is willing to cop to them.
However, relations between the U.S. and the Vatican are strained,
and that strain increased in the past week after Pope Leo spoke out against the war in Iran,
and President Trump issued a social media post criticizing Pope Leo.
All right, that is it for your questions answered.
I'm going to send it back to Jon for the rest of the show, and I'll see you guys tomorrow.
Have a good one. Peace.
Thanks, Isaac.
Here is a new section called The Road Not Taken.
In this section, we offer a peek behind the curtains at Tangle's editorial process,
highlighting at least one story from the week that we almost covered as a main topic
and an explanation for why we ultimately chose not to.
For Wednesday's and Thursday's editions, we contemplated three topics that we ultimately
opted not to cover immediately:
President Trump's feud with Pope Leo XIV,
the Justice Department's move to dismiss seditious conspiracy charges for January 6th rioters,
and the peace talks between Israel and Lebanon.
We briefly addressed the Trump-Pope story today through a reader question and may cover it as a main story in the future.
The January 6 charges dismissal is also a candidate for next week, but we didn't find sufficient commentary from the right and the left by Thursday.
We're also planning to dedicate full coverage to Israel-Lebanon negotiations and the ongoing conflict as the situation progresses.
And last but not least is our Have a Nice Day story.
When fear strikes, knowing where to find help matters.
A group of boys riding their bikes through their Akron, Ohio, neighborhood
spotted someone they didn't know and felt unsafe.
Instead of panicking, they rode their bikes to the home of their local pastor, Crystal Varner,
and asked her to pray for them.
Varner, who leads a church with her husband, answered the door and did exactly that.
The doorbell footage went viral after being picked up by CBS News,
reaching viewers across Germany, Spain, Australia, Nigeria, South Africa, and beyond.
"That's why I love our pastor," one of the boys said. Varner, a former single mother who relied on food stamps, now focuses her ministry
on feeding families in need and creating spaces of support. Sunny Skies has this story and there's a
link in today's episode description. All right, everybody, that is it for today's episode. As always,
if you'd like to support our work, please go to readtangle.com, where you can sign up for a
newsletter membership, podcast membership, or a bundled membership that gets you a discount on both.
Don't forget that in tomorrow's Friday edition, we have something a little different: two Tangle
editors will debate the issue of data centers. Managing Editor Ari Weitzman will argue in favor of
building more data centers, while Associate Editor Lindsey Knuth will argue against. A reminder that
Friday editions are for members only. So if this is a piece that interests you, now is a great
time to sign up. You can head over to Apple Podcasts, Spotify, or any major podcast platform, or our
YouTube channel to check out the latest edition of our Suspension of the Rules podcast. We will return
here on Monday. For Isaac and the rest of the crew, this is Jon Lall, signing off.
Have an absolutely wonderful weekend, y'all.
Peace.
Our executive editor and founder is me, Isaac Saul, and our executive producer is Jon Lall.
Today's episode was edited and engineered by Dewey Thomas.
Our editorial staff is led by managing editor Ari Weitzman, with senior editor Will Kaback
and associate editors Audrey Moorehead, Lindsey Knuth, and Bailey Saul.
Music for the podcast was produced by Diet 75.
To learn more about Tangle and to sign up for a membership, please visit our website at readtangle.com.
