a16z Podcast - Ben Thompson: Anthropic, the Pentagon, and the Limits of Private Power
Episode Date: March 5, 2026

In this conversation, previously aired on TBPN, John Coogan and Jordi Hays speak with Ben Thompson, founder of Stratechery, about his essay "Anthropic and Alignment" and the broader collision between AI power and state power that the Anthropic–Department of War standoff revealed.

Resources:
Follow Ben Thompson on X: https://twitter.com/benthompson
Follow John Coogan on X: https://twitter.com/johncoogan
Follow Jordi Hays on X: https://twitter.com/jordihays
Follow TBPN on X: https://twitter.com/tbpn

Stay Updated:
Find a16z on YouTube
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
You might not be interested in politics, but politics has an interest in you.
What is politics? War by other means.
You might not be interested in that.
It is going to have an interest in you.
If we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly,
you have to think through what would happen in a world where a private company developed nuclear weapons.
This tension has been brewing for years, which is: are you an American company, subject to American law and, even beyond law, just morally compelled to support the U.S. military or not?
A private company built something powerful enough that the government threatened to destroy it
for not cooperating.
That's not hypothetical.
It happened last week when the Department of War designated Anthropic a supply chain risk
after the company refused to remove safeguards against mass domestic surveillance and
autonomous weapons. Ben Thompson's response wasn't to defend either side. It was to point out what
almost no one was saying: if AI is as powerful as its builders claim, the people with guns are
going to want a say. Whether that means the U.S. government compelling access or China deciding
to act because America is getting too powerful, these are no longer theoretical questions.
In this conversation, previously aired on TBPN, John Coogan and Jordi Hays speak
with Ben Thompson, founder of Stratechery, about his essay, "Anthropic and Alignment."
We have Ben Thompson from Stratechery in the Restream waiting room.
Welcome to the show, Ben. How are you doing?
I'm good. Hopefully I have the right microphone turned on this time.
You do, and it sounds fantastic.
Thank you so much for joining on short notice.
Thank you for writing Anthropic and Alignment.
It is a fantastic piece that I think covers all of my questions.
But I want to start with just: how did you process the weekend?
How did you get to this particular place?
And then like what is your key thesis with Anthropic and alignment?
I mean, this is one of those ones.
I don't know if it's good or bad that it came out sort of at the end of the week.
So I had a lot of time to think about it.
Ultimately, I think it was good, because I'm not sure anyone had as explicitly made the point I did.
Yeah.
And maybe it was bad because, in retrospect, I feel I should have put things in the article that would have addressed a lot of the points people are upset about.
Yeah.
Basically, zooming out, this was not a normative article where I'm saying what's happening is good or bad.
And that's really the one caveat I wish I had put on there.
I mean, I'm being accused out there of, like, a full-throated endorsement of fascism or something like that.
And it's like, relax.
Okay, can I get some credit for the last X number of years?
Basically, there is a deep-rooted concern that I've had for a long time (and I'm now hesitant to even use EA as a term, because it's kind of politicized now thanks to the events of the last week) about a failure to grapple with a world of guns. That's the long and short of it. And I actually think Eliezer Yudkowsky has been the one guy who's been honest about this, where he wrote that Time article about potentially bombing data centers someday.
And that's actually a point worth bringing up,
which is that all this stuff is right now in the digital realm.
With robotics and other potential applications,
and with it obviously being used for military operations,
it's crossing over into the physical realm.
But if AI is as powerful as people say it's going to be,
then there are going to be real-world reactions to that.
And if we're going to analogize it to nuclear weapons,
as Dario Amodei has done repeatedly,
you have to think through what would happen in a world
where a private company developed nuclear weapons.
What would the government's response be?
And that's not to say that the government response in that case
is good or bad, or does it follow sort of constitutional principles
or whatever it might be.
Obviously, I want them to.
On the surveillance point,
I've been concerned about the application of computers to our surveillance laws for years.
Like so many things in our society assumed a certain level of friction in doing things
that computers already obviated and AI is going to just do that on steroids.
I do think we need new laws.
I think all this stuff is correct.
And I think the idea that AI being applied to these commercially purchased data sets
for example, is a huge problem that I don't want to happen.
The concern I have is that if this technology is as powerful as it is on pace to be,
unilaterally imposing restrictions, even if those restrictions are good,
isn't just an issue as far as who rules us, the democracy issue,
that sort of Palmer Luckey, I think, very eloquently raised.
It's inviting very bad outcomes for those asserting that in general.
And I feel there's been a lack of awareness of this.
That's why I brought up the Taiwan-China thing.
This has been a frustration I've had with Anthropic generally.
They talk about, you know, Amodei has been very outspoken in terms of opposing selling chips to China,
for, in a narrow aspect, very, very good reasons.
My pushback has always been what happens if we get super powerful AI and China doesn't.
What are they going to do?
Sure.
The optimal thing would be to just bomb TSMC out of existence, because suddenly that becomes optimal,
even with all the costs that entails.
And then what?
Then what are we going to do?
Like, we're entering this era.
I don't like getting into political posts. It's not fun at all. I'm not having fun with this. It's not enjoyable. I can promise you this.
And some people are like, well, you should have just made the post-private. I'm like, no, I actually, I really want Anthropic and people associated with this to read this because people have theorized for a while about what's going to happen as AI becomes more powerful.
and now it's starting to happen for real.
And I guess over the weekend,
part of it was just that I felt compelled to say this,
and girding myself to do so.
And even then, I still wasn't sure.
I hadn't waded into this in a while,
and it's no fun, but it is what it is.
Can you unpack a little bit more of that tweet that you posted,
where you did a find on the Dario essay for "Taiwan"
and saw that it wasn't mentioned?
Oh, I mean, I've just kind of,
I've sort of griped about
this in general. I think that...
So do you just think he should be
talking about the Taiwan issue more
deliberately? He should be messaging that.
Like, why is it important,
why is it significant, that he
doesn't mention Taiwan?
Well,
I think the position
about not selling chips to China
is a totally legitimate one. I
understand the argument. I could
make that argument
if I needed to.
I have advocated the opposite: that, number one, not only should we be selling chips to China a generation
or two behind, which has always been sort of our standard practice with chips, we should also be
allowing Chinese companies to fab with TSMC. That is a restriction that has come down.
Now, these Huawei chips are somehow manufactured by TSMC, let's not look too closely at it,
but we should explicitly be allowing it. And the reason for that is I think it is a safer
equilibrium to have China dependent on Taiwan than to try to cut them off from Taiwan.
Well, we are dependent on Taiwan.
Taiwan is 70 miles off the coast of China.
It's not an ideal position in the world for us to have a dependency on it and China to not have a
dependency on it.
So this is the problem: everything going forward has massive tradeoffs.
The implication of letting China fab with TSMC, or the implication of letting them buy
Nvidia chips, is that they gain these incredibly powerful
AI capabilities, which is what's driving this entire debate.
That is in a vacuum, not a good thing, but nothing's in a vacuum.
Everything is a tradeoff.
And in that specific area, it's repeatedly, again and again, being absolutist about the chip
issue, when I am frustrated to not see any public comment about the... well, that's not quite fair.
He has made comments about, oh yeah, that would slow down the adoption of AI
in the long run if Taiwan got bombed.
And in my mind, that's an insufficient consideration of the possibility of Taiwan
getting bombed.
Now, again, I am biased in that regard.
I lived there for nearly two decades.
But the reason I brought it up in this context is, if AI is what it is,
the people with guns are going to want to have a say.
Whether that be domestically, whether that be internationally,
that might be in the context of the U.S. government just taking it,
or trying to kill your company because they feel you're not cooperating,
or it might be the context of China deciding it has to act
because the U.S. is becoming too powerful.
And it's not a fun debate.
I do think the nuclear angle is a good one.
It has echoes of the proliferation question,
of mutually assured destruction, all those sorts of things.
And that's just going to be the reality of the debate going forward.
And again, it's not very fun, but I think it's also irresponsible to sort of run away from it.
How much attention or what kind of factor do you think the information asymmetry
between the Department of War and Anthropic played last week?
It felt like in hindsight, Department of War knows they're headed into a major,
what is now looking like a drawn-out conflict,
and Anthropic sitting there thinking,
hey, we have this arbitrary deadline,
why do we need to renegotiate this now?
And then if going off of Emil Michael's timeline,
it sounds like they were still in the final hour
trying to make a deal happen.
And according to Emil, Dario was in a meeting and was busy
and wasn't really respecting the deadline,
which maybe he felt was kind of artificial.
but in hindsight it now looks like it was significant,
because the Department of War was taking the country into a conflict
and wanted to know, hey, can we lean on one of our AI partners?
I don't know.
I mean, I think that seems like a pretty arbitrary place to have cut things off.
I mean, I'm hesitant to speculate.
I don't know what was going on.
I don't know the angles.
And that's why I didn't delve too deeply into it.
And I also think some of the specifics, like this supply chain risk designation, are probably overbroad.
Yeah.
And almost certainly the way it was stated in the tweet is definitely overbroad if you actually
go and read the statute.
The goal I had, and again, this is where I wish I had put in more
caveats to say, like, I'm not actually talking about all that stuff.
I don't really care.
I do care, but that's not the point of this article.
The point of this article is there's all this talk about alignment.
That's why I put that in the headline.
And on one hand, alignment is aligning AI with humanity generally.
But for the foreseeable future, and you could have a philosophical argument about the long-term viability of nation states and the age of the Internet, much less the age of AI and whatever that might be.
That certainly is a more pressing conversation than probably ever before.
Anthropic exists in the context of the United States.
And that's why I put that quote,
you might not be interested in politics,
but politics has an interest in you.
What is politics?
War by other means.
You might not be interested in that.
It is going to have an interest in you.
And, like I said, there's a certain longstanding frustration
of not fully grappling with that fact,
of having dorm-room theoretical arguments about AGI.
You go back to that post over Christmas about like AGI in like 100 years and no one having
any jobs or being worthless or pointless or whatever, which included some implicit assumptions
around property rights existing in 150 years as they exist today.
Newsflash, if that happens, property rights as they exist today are going away.
All these rights.
This is a philosophical argument.
That's why I started with the international law concept.
All these rights, all these laws are subject to the agreement of those governed by them to follow them.
And the final say is those who successfully inflict violence.
And again, this isn't fun to think about.
It's not pleasant.
You like to assume we operate in a world of laws that everyone follows them and goes by them.
But to the extent AI is as impactful and powerful as it is, the more these questions,
fundamental questions that we thought have been settled for hundreds of years, if not thousands of
years, are going to be raised. And this is just the first of several episodes where I think that's
going to happen. I grew up sort of post-Cold War, no duck-and-cover, didn't have a lot of
fear of nuclear Armageddon, but Dario Amodei is, you know, a fan of this book, The Making of
the Atomic Bomb. And it seemed like he sort of predicted that,
if AI becomes super powerful, the U.S. might take a similar approach that they did with the regulation of nuclear weapons.
And as I was thinking about that, I feel sort of good about the way nuclear weapons are regulated.
Like, I feel like we got the good ending, and we haven't had nuclear weapons dropped in 70 years.
And it seems like things are going well there as well as they can, considering that there's this amazing or, you know, tremendous, like, dangerous technology that exists.
but it hasn't been deployed.
It hasn't actually, you know, bombed anyone.
But how do you think he's processing that book?
How do you think we should be processing that idea
of the government running the same playbook
that they did with nuclear weapons?
It's pretty interesting.
I mean, on one hand, just from sort of a physical perspective,
dealing with weights and software
is very different than dealing with fissionable material
or I guess the super bombs are like,
they're actually like fusion devices, right?
And that is trackable.
It is interceptable.
You know when Iran, to take a pertinent example,
is trying to build enrichment facilities,
all of which makes the problem easier to solve.
Yeah.
So that's difference number one.
Difference number two, and I really wish I had kept this in,
I had it included and I cut it so the article would be tighter.
But there is a very interesting point in technological history, which was the early days of Intel.
And Bob Noyce made the decision that we will sell to the government, but we're not going to design chips for the government.
And the distinction there was: you had guaranteed orders, which was great,
but the government would take your IP, and, in his mind, the more important thing was that there was limited volume.
And he foresaw, correctly, that this was going to be a very capital-intensive, upfront process of designing chips.
If you have to design them, you have to do the equipment, all of which is in the billions of dollars today;
back then it was in the tens and hundreds of millions. So you need to find the largest possible market, which was the consumer slash business market.
You design for that.
That will accelerate your improvement and your capabilities so much that you will end up having better devices than the government could have ever requested or made
for itself.
That is at work on steroids with AI.
Like, I was talking to someone:
why doesn't the government just get someone to make their own model?
It's like, because, you talk about government contracts,
we're talking single-digit billions.
For the amount that's going into CapEx and the cost of these models,
we're talking hundreds of millions of dollars for the models, and hundreds of billions of dollars,
approaching a trillion dollars a year, in CapEx.
That is only sustainable and viable if you're selling to everyone.
But that introduces entirely new dynamics.
With nuclear, the government built it; it started there,
and it started with a lot of assumptions because it was a government program.
We are necessarily, for economic reasons,
because of all the upfront costs entailed,
starting with private companies,
of which the government is one of many customers.
And that introduces the assumption that, well, it's a private company with private property rights and all those
sorts of things, all of which I want to be true. Again, I don't like how this is going down at all.
The point here is to say there's a good reason why it's not going down that way. And there needs to be
cognizance that even though this is a private company building a general-purpose model,
one that for very good reasons wants to put on restrictions (again, I think the surveillance one is a very
powerful argument that I agree with), the problem is that you just need to be aware of: yes,
the government is a small customer, but the government is also the entity
with the guns. Like, why do I pay taxes? Because the law says to pay taxes.
Yep. No, at the end of the day, I pay taxes because, you know, if you really want to distill it down,
if I don't, someone with guns will come to my house and throw me in jail. Right. Like,
we don't think about that. But at the end of the day,
where do these assumptions and laws and rights flow from?
And as long as that is still the case,
it needs to be a decision-making factor for these companies.
How do you think this plays out for Anthropic?
It's such a small contract, but it's so important in the zeitgeist.
There's a lot of people that are rallying around Anthropic because of this.
There's a lot of people that are pulling away from Anthropic because of this.
It feels like there is a business to be built
that doesn't work with the government,
but delivers coding models and knowledge retrieval systems
and a whole bunch of really valuable products and technology,
and it winds up being fine.
But at the same time, you don't want this, like, hairy relationship
with the government adversarial to go on for a long time.
I would like them to sell to the government,
and I would like Congress to pass a law addressing these digital surveillance issues.
Yeah.
And a lot of people are like, that's unrealistic,
which I'm amenable to.
But at the end of the day,
if you don't have
"is it legal or not legal"
as your guiding standard,
the only alternative is
someone has to decide.
And the implication of that
not being a sufficient justification
is that means a private executive
is deciding.
Yeah.
And if AI is what it is,
I think that's going to be,
I use this word intolerable.
I didn't mean intolerable to me.
I meant intolerable
to those with power
to have a private executive
making those decisions or not.
And if we're going to have this very sort of
brute analysis that laws flow from power,
AI is a source of power.
So it's not just that, and I think this is
where the supply chain designation, again, which
I'm not endorsing, but I think
that's where the motivation is coming from:
the goal isn't to just say, fine, we won't use Anthropic.
I do think the goal is to hurt
Anthropic. If you're
not going to be subservient
to us, you're not going to
be allowed to build a power base, period.
And again, I'm not
endorsing all this.
It's just a matter of, it's not
a surprise this is happening.
Yeah. And
this just needs to be a
real risk factor, something real
that has to be considered in all these
decisions.
Putting on my
Dario hat, I'm thinking about
a different way to achieve the
goals with maybe less
acrimony. And I threw out
this idea that maybe the
better solution is, like,
work with the government, but
then lobby for
a surveillance act.
And I wish the White House
would come out and say, yeah, there's a digital
surveillance problem. Let's work on a bill.
I'm probably,
another regret I have is sort of putting this
all on Anthropic. That was sort of
the angle I was concerned about. And
that left me, I think, fairly
open to the critique that this is just
defending the White House's approach.
And again, I was trying to be at a higher
level, saying, look, this is what's going to happen.
But yeah,
I think there should be a way to find a
middle ground here. I'm just thinking from the
perspective of: if the White
House is this immutable thing, and
you are involved in Anthropic,
one piece of advice would be, hey, okay, instead of going and having this
confrontation with the government directly, go and start a political action committee that
lobbies for change in the way that you want through the democratic process.
Yes, that is the ideal process.
I understand why people are frustrated and skeptical about this.
I used to have this debate a lot in the context of antitrust and aggregators.
And one of my longstanding arguments about aggregators and antitrust is that the antitrust laws are fundamentally unsuited to dealing with aggregators.
Because antitrust law has historically been about control of supply and the power of aggregators flows from control of demand.
And so you end up with all these solutions that I call pushing on a string.
You're just trying to get people to change how they behave.
And that doesn't work very well.
Like Google has always been right.
Competition has always been just a click away.
The problem is people aren't clicking.
And so the solutions focused on the supply angle don't work in a world where the supply is there, just no one's choosing it.
Yeah.
And therefore, my prescription is you actually need to pass new laws, not try to retrofit these old laws to this new use case where they don't work.
And the reaction is always, that's impossible.
We can't pass new laws.
And okay, but realize the implications of what you're saying.
I mean, I saw a tweet, again, I didn't like it, so I lost it forever.
It's one of the most infuriating things in the world.
But someone was like, I would definitely rather have Dario Amodei make these decisions than,
and to this tweeter's credit, he wasn't limiting it to Trump.
Because to me, this isn't a Trump issue.
This is an any-politician issue.
He said, I would rather have Amodei making these decisions than whoever comes out of our screwed-up democratic process.
Yeah.
And points for the honesty, because that's the actual choice that is being
put forward. And you can say Congress isn't going to do anything, therefore Amodei should decide;
just appreciate that that is giving up on the democratic process, saying we should have unelected,
unaccountable individuals making weighty decisions. And again, I understand the sentiment. It's
hard to imagine Congress passing laws about anything. But just realize that implication is
quite fraught.
Yeah, it's a huge change. I mean, I just spawned in believing in democracy, and then
came to understand it and studied economics, which just reinforced my belief in the American project
throughout my entire career. And now it really is people discussing an entirely different
world of governance, which has been not something people have talked about publicly for a very
long time, but it is here for sure.
Right. And they always come in on these Trojan horses
that are eminently defensible.
Again, I'm with Anthropic
on the digital surveillance point.
I've been concerned about it for years,
been writing about it for ages.
And it's similar;
there is an analogy to the antitrust point.
You have all these laws that assume
someone has to actually physically go somewhere
and tap into a phone line.
But if you can do it with computers at scale,
like suddenly you had all these assumptions
that limited what the government could do
that magically disappear,
not because the law changed,
but because we got computers
that could do the job
of an individual at scale infinitely.
And AI, again, is going to supercharge that.
Like, the idea that the NSA, by the way,
this is something I had to admit in the article:
I was so confused
why the Pentagon was so obsessed
with domestic surveillance.
I didn't realize the NSA was part of the Pentagon.
John and I had the same moment.
Yeah, yeah.
Yeah, you sort of thought about it
as, like, an independent agency, like the CIA.
But that made a lot of this story
make more sense.
Right.
No, exactly.
Yeah, I feel like a lot of tech people
are like reading the Fourth Amendment today
and understanding like some of these like pretty basic processes.
Well, yeah, but the loopholes are massive.
Like, I'm not denying it.
And it's similar to the chip thing with China.
Like, my prescription for Anthropic, to give in,
is to allow these massive loopholes to be exploited.
And for the NSA, allegedly in the service of investigating foreign adversaries,
but in the process basically surveilling the domestic population,
I think that is bad.
And the reality is the nature of tradeoffs is you're choosing between multiple bad options.
And at some point, it's like, which team are you signing up for?
They both suck.
What do you think of the messaging around like the models themselves not being capable
enough to be used in the context that the Department of War asked for?
Because I felt like Dario was sort of speaking for all frontier labs.
He said that these technologies broadly are not suitable for these missions just yet.
I'm not sure that he has all of the information on the other side to know about the efficacy,
but he certainly understands his models and what's capable at the frontier.
I mean, I would assume they're definitely not capable.
I think that point is more of a precedent-setting one.
I think Anthropics position is significantly weaker on that point.
Like at the end of the day, we either trust the military or not to make these sorts of decisions.
That's why we have a military.
And so I have a harder time with it.
And I think the digital surveillance point is so compelling for them because, I think, it matches my personal biases.
Totally.
I think it's a huge problem.
Yeah.
The various anecdotes, again, I hate the reporting on these, because you can tell which side the leaks are coming from for each of them.
Yep.
But, you know, this idea of putting forward these hypothetical examples, like, oh, you can call us and we'll figure it out then.
It's like, no, come on.
Yeah.
Be serious about this.
So, yeah.
I think that's a weak argument for them.
So that's why I almost focused more
on the digital surveillance one,
just because I think it is a very compelling argument
in favor of the Anthropic position.
Jordi, anything else?
Oh, there's a lot more.
What are you going to be tracking going forward?
Obviously, the stories about me.
Yeah.
Good luck.
Stay strong.
No, I mean, the OpenAI angle is obviously interesting.
I didn't really get into OpenAI.
It's hard to parse exactly what's going on.
It seems to me they have agreed with the Pentagon
that the Pentagon will be limited to lawful capabilities,
and the Pentagon makes its own judgments about weapon usage.
And as I understand it, OpenAI is like,
we will, on our side, be free to stop the model from doing digital surveillance.
Which sounds like you're in sort of a jailbreak competition.
It's like, we're going to agree to have a jailbreak competition with the U.S. government.
Which, again, is an example of how fraught this is, that that's probably the good place to come down on.
Now, there's obviously these dynamics of competing for the same talent base, being in San Francisco.
I think Anthropic has a local advantage,
in that most people in the industry, I think, are with them,
and a national PR problem, in that I think a lot of folks outside of tech
don't understand why tech companies resist helping the U.S. government.
And so it's kind of an interesting dynamic, where I think OpenAI is in step with the broader public
and very much out of step with their talent base in San Francisco.
And so that's going to be very interesting to see how it plays out.
Yeah. It's remarkable that Google has stayed out of the fray, given all the Project
Maven background and stuff. Like, they must be so happy that they're just like...
Well, that's the other thing: this actually goes back to Google, I believe.
I think this is right, but Google had Project Maven, which their employees
objected to. Yep. And therefore, that went to AWS. Yep. And then some combination of, I
think the Pentagon is using Anthropic
because that's what's on AWS,
and AWS has a higher FedRAMP designation.
That's right.
And so that's why Anthropic was already
allowed for classified content and OpenAI wasn't.
Again, I don't know the extent.
I've studied Maven pretty closely.
It's a wild story.
I mean, it was similar: AI for the military,
the same killer robot fears.
The actual, I mean, Google was a subcontractor
on that project.
and what they were actually exposing to the government was TensorFlow APIs
that would run on Google hardware.
And so they weren't actually writing any AI software,
but they wanted to effectively classify images from drones in the Middle East,
see, that's a car, that's a house.
And previously they had Air Force airmen just sitting there, like, clicking,
and they were like, okay, we're going to automate that.
Right.
But it was still, like, scary, don't be evil, working with the government,
military, and then there was a backlash.
They pulled out.
Then eventually they went back in
and had a new head of Google Cloud.
Yeah, I mean, and I speak for myself personally,
I obviously have the biased angle because of Taiwan.
But I think, just in general,
there's this very naive view of the world
that doesn't understand why militaries
are important and necessary.
And I think Silicon Valley got itself in a lot of trouble
by giving in to this naive mindset
that we have no duty to support the military.
And this is a tension that's been brewing for years.
Yeah.
Which is, are you an American company subject to American law
and even beyond law,
just morally compelled to support the U.S. military or not?
And there's an equally American sort of idea of moral conscience:
I'm able to say no.
That's why we have the First Amendment, right?
This goes into the, can the government compel a company to do something?
It goes back to some of the questions that came up, you know, with the first Trump administration.
And, you know, I've been on both sides of this.
And this is what Dario said in his CBS interview.
He said, we are a private company.
We can choose to sell or not sell whatever we want.
There are other providers.
He's already sort of like making this case.
Yeah.
Which, again, is a case
that I support.
But the point here is,
there's always the question, like with a bubble or whatever:
is it different this time?
And I guess that's sort of the question I'm raising.
Is AI actually comparable to every other technology that's come along?
Or, if it has the potential to be a source of power going forward,
it's going to be dealt with as such.
Yeah, that makes sense.
Last question, then we'll let you go:
how happy should Ted Sarandos be right now?
I mean, I think he had the killer quote
the last couple of days, where someone was asking him,
if this is such a jewel and it's so rare,
isn't it a problem that you're missing out on it?
And he's like, well, have you seen the history of Time Warner?
Which I think sounds about right.
With all the debt that the Paramount Warner Brothers entity is taking on,
I'm not sure who else content owners
are going to sell to.
Yeah.
I feel like they've been spooked by YouTube a little bit,
and they felt a need to push forward.
Yeah.
To bring the future forward.
That was not allowed to happen,
but that means their original plan,
I think, is still in place.
So, probably pretty happy, all things considered, I'm going to say.
That's great.
Well, I'm excited to get back to Netflix coverage and more anodyne topics.
Yeah, remember, you were talking about getting sucked into it all.
And here we are.
So I put that quote at the beginning of my article.
You know, you may not be interested in politics,
but politics is interested in you.
That was about Anthropic, and it was also about me.
Yes, yes, yes, yes.
Welcome to 2026.
Well, we thank you for taking the time to come chat with us.
Yeah, great to see you.
And a fantastic article.
We appreciate you, Ben.
Talk to you soon.
Thank you.
Have a good one.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like,
comment, subscribe, leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcasts, and Spotify.
Follow us on X, at A16Z, and subscribe to our Substack at A16Z.com.
Thanks again for listening, and I'll see you in the next episode.
This information is for educational purposes only and is not a recommendation to buy, hold,
or sell any investment or financial product.
This podcast has been produced by a third party and may include paid promotional advertisements,
other company references and individuals
unaffiliated with A16Z.
Such advertisements, companies, and individuals
are not endorsed by AH Capital Management LLC,
A16Z, or any of its affiliates.
Information is from sources deemed reliable
on the date of publication,
but A16Z does not guarantee its accuracy.
