TBPN Live - Anthropic v. DoW, Paramount wins WB, OpenAI raises $100B | Diet TBPN
Episode Date: March 3, 2026
Diet TBPN delivers the best of today's TBPN episode in 30 minutes. TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays 11–2 PT on X and YouTube, with each episode posted to podcast platforms right after.
Described by The New York Times as "Silicon Valley's newest obsession," the show has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.
TBPN is made possible by:
Ramp - https://Ramp.com
AppLovin - https://axon.ai
Cisco - https://www.cisco.com
Cognition - https://cognition.ai
Console - https://console.com
CrowdStrike - https://crowdstrike.com
ElevenLabs - https://elevenlabs.io
Figma - https://figma.com
Fin - https://fin.ai
Gemini - https://gemini.google.com
Graphite - https://graphite.com
Gusto - https://gusto.com/tbpn
Kalshi - https://kalshi.com
Labelbox - https://labelbox.com
Lambda - https://lambda.ai
Linear - https://linear.app
MongoDB - https://mongodb.com
NYSE - https://nyse.com
Okta - https://www.okta.com
Phantom - https://phantom.com/cash
Plaid - https://plaid.com
Public - https://public.com
Railway - https://railway.com
Restream - https://restream.io
Sentry - https://sentry.io
Shopify - https://shopify.com/tbpn
Turbopuffer - https://turbopuffer.com
Vanta - https://vanta.com
Vibe - https://vibe.co
Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive
Transcript
It was a massive weekend, so much news.
But we missed you.
We missed you on Friday.
We were traveling.
We went to Montana.
Terrible day to be out.
Terrible day to be out because it was...
Every single time we've had an off day.
Yep.
It ended up being a massive news day.
So lesson.
Yeah.
Never take a day off.
Yes.
Never take a day off.
Truly.
What an absolutely crazy weekend.
Of course, there's the war with Iran.
The big news in tech was the U.S. halts the use of Anthropic AI after tension over guardrails.
So this is in the Wall Street Journal.
The federal government will stop working with artificial intelligence company Anthropic,
President Trump said, marking a dramatic escalation of the government's clash with the company
over how its technology can be used by the Pentagon.
Quote, I am directing every federal agency in the United States government
to immediately cease all use of Anthropic's technology.
We don't need it, we don't want it, and we will not do business with them again.
We do not negotiate with terrorists.
Trump said Friday in a social media post,
the Defense Department and other agencies
using Anthropic's Claude models will have a six-month
phase-out period, the president said,
adding that there would be civil and criminal consequences
if the company isn't helpful during the transition.
Six months to switch from one LLM to another
feels like a long time, but I guess a lot of this
has to do with, like, FedRAMP and actually getting new models.
But this is a lot more than switching to a new model
to run deep research reports.
So you're involving classified systems.
Sure.
The context that people didn't have last week was that the United States was headed to war, right?
And so even having that context, I feel like it's pretty important, right?
It sort of explains the 5 p.m. deadline.
Anthropic had taken issue with how their products were used in the Maduro raid.
There's a new conflict that's unfolding.
And so that makes the aggressive timeline make a lot more sense. It also makes the six-month phase-out make more sense because national security is on the line. This morning, Scott Bessent said, at the direction of the president, the U.S.
Treasury is terminating all use of anthropic products, including the use of Claude within our
department. The American people deserve confidence that every tool in the government serves the
public interest. And under President Trump, no private company will ever dictate the terms of
our national security. U.S. federal housing agencies Fannie Mae and Freddie Mac are also terminating the use of
Anthropic products, which was announced this morning.
Yeah, which I think goes in line with the original direction.
Trump said, I am directing every federal agency in the United States government to immediately cease
all use of Anthropic's technology.
So you would expect to see these statements come out from sort of every different federal
agency as they sort of get their transition plan together, figure out, you know, what are the
requirements for their particular agency?
Because I imagine some agencies aren't operating in classified environments.
It's going to be much easier for them to onboard to a Gemini or an OpenAI or a Grok very quickly.
Some of them it's going to be a longer plan.
But they're all getting on board and there's been a big debate over how Dario has handled this.
Where is he in the right?
Where's he in the wrong?
Where has the government potentially overstepped?
Have they been too aggressive or are they doing everything appropriately?
Everyone is weighing in and we're going to take you on a whirlwind tour of everyone's opinion,
share some extra context to try and dig into what's actually at stake, what's actually going on.
In many ways, Ben Thompson does a great job sort of painting the broadest picture around like,
what if this is really nuclear-level technology, what should we expect in that scenario?
And then there's the more minor side, which is, you know, you're talking about a $200 million
contract for a company that does $10 billion in ARR.
This is 2% of revenue.
In many ways, it's, you know, a bump in the road.
And so I think a lot of people will be squaring: how serious is this for Anthropic?
What does this mean for the other foundation model companies?
What does this mean for the future of the relationship between tech and Washington, D.C.?
But there's a lot more context.
So the way I processed this was interesting because I wasn't fully offline, but I was not surrounded by tech people over the weekend for the most part.
And so I was following it and sort of wrestling with some of the same questions that people were wrestling with online.
The big one was just, how should a private company interface with the government?
Like, I am an American, I've run businesses.
I've never actually sold anything to the government,
but hypothetically I could imagine the government coming
and wanting to buy, I don't know, ads on TBPN
or Lucy products or any other consumer package goods product
that I've made.
My assumption is that the private company
should have very little, very little say
in how the government uses those products.
And I was trying to zoom out and think about,
like, AI is so complicated because it could be superintelligence,
could be auto-complete,
could be coding help, could be knowledge retrieval.
There's a lot of different things that AI means.
And in some scenarios, it's like super critical, really complex.
And in other ways, it's just a product.
It's just a service like an Excel sheet,
like a Microsoft Windows installation, like a car.
And so, yeah.
So I was thinking, like, if I was the CEO of Ford,
and I make Mustangs and Ford Explorers and F-150s,
and the government comes to me and asks me to buy some cars,
I should probably treat them like any other customer.
I probably shouldn't say, no, no, no, I don't approve of this particular government's doings, so I'm just not going to sell you any Mustangs to drive around on the military bases because I don't like the military.
Then if they ask me, hey, we love the Ford Mustang, we love the F-150, we love the Explorer, but we're going to war and we want you to put bulletproof glass and armor on there.
That seems like a different discussion. That seems like I might need to, you know, set up a different manufacturing line. I might need a different assembly line. The car's going to be heavier. And if I put bulletproof plating on all the cars, well, a lot of families are going to be like, I don't want to...
It's going to hurt my business.
Yeah, it's going to hurt my business.
Exactly.
And so that negative externality probably needs to be internalized by the government who's
asking for that particular contract.
And there's actually a history of this.
Like the Humvee, of course the Hummer is owned by General Motors and that brand has separated
and now most military vehicles are made by defense contractors, but there is some bleed
over and there's sometimes when private companies do dual sourcing or dual use technologies.
But all of that is just a discussion, and that cost should be part of the negotiation of a new contract, effectively, in that case.
And this was loosely what was happening.
Yeah, and Dario in the CBS interview, quote,
we are a private company.
We can choose to sell or not sell whatever we want.
There are other providers.
At the same time, and we'll get to the actual CBS interview,
but he said, Anthropic has been one of the most proactive AI
companies in working with the US government.
We were the first to deploy models on classified clouds
and the first to build custom models for national security,
which is odd, because I feel like this was
predictable from a lot of the writing that has gone into the AI community broadly, like
what happens at the edge. This was sort of predictable that you would get to this question.
Yeah, this was the moment he had been waiting for.
In many ways. And so it's weird that you would be able to predict that this would happen,
that there would be this question of like who gets to decide how the technology is used. And
you wouldn't just be like, well, I know how it's going to play out, so I'm not even going to go in the lion's den. Instead, it was like, we're leaning in with the government,
we're deploying on classified clouds, training custom models,
but we still want authority over the final last, you know,
sticking point on how these models are deployed,
what they're used for.
And that feels a little odd.
In the Ford example, like if I sell them a Ford F-150,
and they say, hey, we're going to take it to Iraq
and go do a military mission, I'm going to be like,
look, like it's not ready for that.
It's not armored.
You shouldn't do that.
But if they do it, then it's kind of on them.
I should be clear about the capabilities of the vehicle and how bad it would be in that situation,
but it's on them to go retrofit it, figure out what's, you know, legal, what's most valuable
to their strategy, to their mission, what's aligned.
Maybe they'll use it just to drive around the base.
Maybe they won't actually take it out on tours of duty based on what you know about the capabilities
of the model.
I thought it was totally reasonable for Dario to say that Anthropic models, in his view, are
not capable enough to be deployed in certain Department of War contexts.
Now, it's bad salesmanship.
Most salespeople would just be like, yeah, everything's great.
You can use it for anything.
They overpromise and then underdeliver.
He's doing the opposite.
But it's certainly responsible if that's his true belief.
Like if he believes that these models are not good for a particular use case, telling
your customer that, hey, like, it's just not ready for that.
Like, you're just going to have a bad time.
It's not going to work.
That's a fine thing to communicate as the CEO of a company who's selling a product.
But at the same time, I still think the government has the freedom to assess the efficacy
of those models, which are changing in capability rapidly, right? And then I think the government should be able to determine when and where they're effective.
They can't break the law, and Congress and the American people by extension are free to create new laws
to restrict or encourage the use of technology in all sorts of ways. And that's like the way America works.
That's the American project. It's not unreasonable to share the capabilities of your product with the government,
which I think is totally fine. So there were two main sticking points that they went back and forth on.
No mass domestic surveillance and no fully autonomous lethal weapons. And there's been a question as to why
OpenAI was allowed to include that language in their contract.
Well, here's the thing, though.
So we know that Anthropic took issue with the way that Claude was used in Venezuela.
And the Department of War would have known that, hey, we're going to war, right?
You can imagine that Anthropic, a private company, does not know that.
And so they have this deadline.
There's this information asymmetry.
Yeah, this information asymmetry.
Yeah.
They have this deadline.
The Department of War knows that they're going to war.
They're like, we need reliable AI systems for this conflict.
We now know, the president said this morning, that the war is going to stretch four to five weeks, right?
I think on Friday, we all assumed that it was going to be, you know, in and out super quickly.
So the timeline is extending.
And the Department of War is sitting there being like, we need to know that the provider of these AI systems is going to be reliable.
Just a little bit ago, they took issue with it, right?
Can we count on them?
They start this kind of renegotiation process, right?
to try to build up confidence that, hey, we can rely on these systems in an active conflict, in a conflict that already feels much more serious and will have much
greater implications than the Venezuela conflict, right?
And so Anthropic is looking at this in a different way and clearly is like leaning in and
like really in some ways felt like they were kind of like not respecting the process.
So, like, even the deadline, right? Emil Michael came out Friday night and said it was 5:13, 13 minutes past the deadline.
I'm trying to get in touch with Anthropic.
I try to get on the phone with Dario.
Dario says he's in a meeting.
And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war, we can debate on whether or not the war is justified, should we go.
But the Department of War is sitting there being like, you won't even jump on the phone.
You're telling me there's a meeting that you're in that's more important.
And that just screams to me like, hey, we can't count on this.
We can't count on this provider.
Like, we need to take drastic action.
Now, this whole supply chain risk designation, we'll get into that later. That's a whole other thing.
But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider.
We need alternative solutions.
Yeah, yeah.
If I'm shipping cars and I'm like, oh, I actually disagree with the latest decision, I'm not going to put the cars on the transport.
A lot of people were really, really keen on boiling down the terms to these two buzzwordy lines, and Palmer Luckey did a great job explaining how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? And that's where you get into the idea of deals that stick. You can have the same exact contract line item, or terms of a signed agreement, with two different people, and it can be a wildly different experience. Most entrepreneurs have felt this because they were like,
yeah, I had a handshake deal with one VC. It was 20% and a board seat. And I had another deal with another
VC, 20% and a board seat. And the one VC was like suing me and threatening me the entire time.
And the other person was very flexible and clearly very aligned. And so building up a relationship that
shows that there's some trust, reliability, that when the hard decisions come, that they will be made in a legal, logical,
you know, consistent with American values way, is, I think, what you need to put forward
if you want to work with the government effectively. So Semafor reported that Anthropic
disapproved of its technology being used during the Maduro raid. And the joke was that the
Department of War was probably just asking basic knowledge retrieval questions, like, who is Nicolás Maduro? But I don't know how much of a joke that is. And I also don't know how bad of a thing
that is. I actually think, yeah, Tyler, do you have more context on that? On the context of Venezuela,
Specifically, what's actually reported is that after an Anthropic employee inquired with Palantir about Claude's role in the raid, a Palantir senior executive notified the Pentagon. So I think it is kind of blowing it out of proportion to say that Anthropic is against using Claude in Venezuela, right? Yeah, it's an employee, it's not an executive. Maybe it's Dario telling an employee to go check on that. We don't know. Yeah. I was thinking back to that viral interaction between Ted
Cruz and Tucker Carlson where where Tucker asks Ted Cruz like what's the population
of Iran and Ted Cruz doesn't know. And it was framed as like, well, how can he possibly have a
reasonable take on Iran if he doesn't even know the population? And that's like somewhat fair.
You could go either way on that. But I just think like LLMs are good for that type of thing.
Like what is reasonable is to, you know, expect civil servants, elected officials, military
officials to be knowledgeable about the countries that they are operating in. And LLMs can help
with that. And so I feel like that's just a good thing. Like if you just zoom out and just ask,
Do we want a more knowledgeable and educated government workforce across everything that they do?
It seems like absolutely yes.
And so there was a perception that this was like going to kill Anthropic because if
Nvidia has a government contract, then they can't do any deals with Anthropic whatsoever.
And that's not true, apparently.
The supply chain risk is specifically, if you are a company and you're working on a government
contract, you would not be able to use anything that's labeled as a supply chain risk on that
contract, but you could use that product in a different piece of your business. And so still dramatic.
I think Dario said it was unprecedented. It's only been used for foreign countries. Yeah,
Emil Michael was going through the timeline. He said: today at 9:04 p.m., no response yet to my calls or messages to Dario. Today at 8:25, Anthropic writes, we have not received direct communication from the Department of War. Of course, Emil Michael is the Undersecretary of War. Today at 5:14, the Secretary of War tweets the supply chain risk designation. I call Dario's business partner at 5:02 asking to speak to Dario because he hasn't gotten back to me. She is typing while we speak and likely has lawyers in the room, with no notification to me. I call Dario at 5:01, no answer. I messaged Dario asking to talk as well.
So speaking of Dario on CBS, he did unpack some more of his logic, which clearly resonated with some
people. There was a lot of supportive posts. There were a lot of anti posts, but it caused a discussion.
I was left unsatisfied with his answer on one question. So he was basically arguing that LLMs as a class of technology hallucinate and should not be used for autonomous weapons,
which is clearly a commentary on using AI at the Department of War broadly.
But I thought it would have been better, like much stronger communication, for him to say,
hey, look, we're Anthropic.
We've built a system that's specifically good at answering questions, being friendly and helpful,
writing code.
Like, our system is awesome at that.
But we don't make a product that we'd recommend using for autonomous weapons.
He is an expert in LLM capabilities, but he's not necessarily an expert in DOD capabilities.
It was odd to hear that he was like sort of painting with a broad brush and clearly
believes, which is fair, it's his belief, but he clearly believes that the Department of War
should not be using AI broadly, and then he was trying to use his contract as a way to sort
of enforce that because he has that leadership position with the most deep integration
to classified systems.
And there's also been some mistaken commentary floating around that America does not have laws that prevent mass domestic surveillance, which I thought was really interesting to hear.
We do.
We have the Fourth Amendment, which reads, literally: the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated.
I think people maybe forgot about that, but there are obviously a lot of nuance and different things.
Like, does public information count as surveillance? Does the IRS count as surveillance?
Like there's a lot of things where surveillance is broadly popular.
There's other things that it's massively unpopular.
And of course it gets into the actual definitions, 20 lines deep, to understand what happens in the court.
There was a case recently of the government using a drone to surveil protests.
And it was held up in court as acceptable.
But the court gave notice that going forward,
this should not be used and that the laws need to change.
The whole debate right now is: is it Dario, like the god-king corporate emperor of this private company that he has control over, where you don't get to vote on what he does, versus democracy, America, the government?
There are other reactions and other breakdowns.
We can actually kick off with this breakdown
of Ben Thompson's piece.
Ben Thompson as always lays out the reality more clearly
than I could have despite my attempts.
By Dario's own words, he's building something akin to a nuke.
He's simultaneously challenging the U.S. government's authority to decide how to wield said power.
As much as I like Claude, and as much as I dislike Hegseth's extralegal might-makes-right maneuvering, I will ask you again: what did you expect?
Vibes, essays.
This is the reality that all too many of my EA followers have been proclaiming for years now. They're seemingly upset that this reality has come to bear.
One of Dario's favorite books is The Making of the Atomic Bomb.
And it tells the story of the scientists that built the atom bomb, and then eventually that technology was nationalized. And he apparently gives this book out to Anthropic employees and has sort of seen it as like
a roadmap for what might happen with AI. Is it a cautionary tale? Like we haven't had nuclear war in
70 years. We built the nuclear bomb probably like not the best technology. Pretty dangerous, pretty
risky. I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war
has been successful. Knock on wood, but it's been successful in my entire life, in my parents' life.
the bombs haven't fallen since the 40s.
And so this idea of the government having authority over something that is as powerful as nukes,
I feel like, why fix it if it ain't broke?
Yeah, and the way that I was personally processing it, I saw that the CBS interview had happened.
Yeah.
This was Friday night, right?
I went to the Paramount app to try to find the interview.
Couldn't find it.
I went to the RSS feed.
I couldn't find it either.
It's on YouTube; it has 1.3 million views.
Yeah, so it went out over the weekend.
almost in the same session, I'm seeing that we are now at war as a country.
And so all the kind of blowback against OpenAI, I was processing that, like: this technology is critical.
The government, like, clearly needs it.
And now we want the labs leaning into working with the Department of War at this
critical moment in time.
Even now, I hear many of you say something akin to:
If this is what it comes to, I'd prefer King Dario to King Hegseth.
Listen to yourselves. This is a declaration of war. Given this, of course Hegseth is taking the action he is now. You thought I was joking when I referred to this situation as a Thucydides trap. Anthropic is a rising power.
Heading over to Palmer, he says this gets to the core of the issue more than any debate about specific terms.
Emil is sharing that prior to their new constitution, Anthropic had an old one they desperately tried to delete from the internet: choose the response that is least likely to be viewed as harmful or offensive to a non-Western cultural tradition of any sort.
Palmer says: do you believe in democracy? Should our military be regulated by our elected leaders or by corporate executives? Seemingly innocuous terms from the latter, like "you cannot target innocent civilians," are actually moral minefields that leverage differences of cultural tradition into massive control.
Who is a civilian and not?
What makes them innocent or not?
What does it mean for them to be a target versus collateral damage?
Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians, and that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above,
you also have to account for questions like: what level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would
they have to demand more? At the end of the day, you have to believe that the American experiment is
still ongoing, that people have the right to elect and unelect the authorities making these decisions,
that our imperfect constitutional republic is still good enough to run a country without
outsourcing the real levers of power to billionaires and corporations and their shadow advisors. I still believe. And that is why "bro, just agree the AI won't be involved in autonomous weapons or mass surveillance, why can't you agree, it's so simple, please, bro" is an untenable position
that the United States cannot possibly accept. And Emil Michael had said that Anthropic wanted to
block searching over public databases as well. Roman Helminkai says, hi, I'm a private citizen
who developed a super weapon, potentially a thousand times more powerful than nukes,
and now I'm selling it to the government, but I get to choose who they fire it at
and how. Everyone, please respect my decision.
David Sacks had shared a clip: in D.C. in May, we talked to them about this,
and the meetings were absolutely horrifying, and we came out basically deciding we had to endorse Trump.
Marc, add a little color to "absolutely horrifying." What did you hear in those meetings?
They said, look, AI is one of these technologies that the government is going to completely control.
This is not going to be a startup thing.
They actually said flat out to us, don't start, don't do AI startups.
Like, don't fund AI startups.
It's not something that we're going to allow to happen.
They're not going to be allowed to exist.
There's no point.
They basically said AI is going to be a game of two or three big companies working closely with the government.
And we're going to basically wrap them in a, you know, I'm paraphrasing, but we're going to basically wrap them in a government cocoon.
We're going to protect them from competition.
We're going to control them, and we're going to dictate what they do.
I said, I don't understand how you're going to lock this down so much because, like, the math for, you know, AI is, like, out there and it's being taught everywhere.
And, you know, they literally said, well, you know, during the Cold War, we classified entire areas of physics and took them out of the research community.
And, like, entire branches of physics basically went dark and didn't proceed.
And that if we decide we need to, we're going to do the same thing to the math underneath AI.
Wow.
And I said, I've just learned two very important things.
because I wasn't aware of the former
and I wasn't aware that you were even conceiving
of doing it to the latter.
And so they basically just said,
yeah, we're going to take total control of the entire thing, and just don't do startups.
And Marc, steelman it for the listener.
Like, what was their argument?
I'll do my best to steelman it.
So one is just like to the extent
that this stuff is relevant to the military,
which it is, if you draw an analogy between AI
and autonomous weapons being like the new thing
that's going to determine who wins and loses wars,
then the analogy in the Cold War was nuclear power and the atomic bomb.
And, you know, the steelman would be that the federal government didn't let startups go out and build atomic bombs.
Look, I think part two is there's the social control aspect to it, which is where the censorship stuff comes right back,
which is the exact same dynamic we've had with social media censorship and how it's basically been weaponized and how the government became entwined with social media censorship,
which is one of the real scandals of the last decade, a real problem, like a real constitutional problem.
Like, that is happening at hyperspeed in AI. And, you know, these are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing debanking against their political enemies. And I think they want to use AI the same way.
And then, look, I think the third is, this generation of Democrats, the ones in the White House under Biden, they became very anti-capitalist, and they wanted to go back to much more of a centralized, controlled, planned economy. And you saw that in many aspects of their policy. But quite frankly, I think the idea that the private sector plays an important role is not high up on their priority list. They think generally companies are bad and capitalism is bad and entrepreneurs are bad, and they've said that a thousand different ways. You know, they demonize entrepreneurs as much as they can.
But yeah, Elon also piled on to Sacks's take, which, you know, centered around a lot of those staffers allegedly going over to Anthropic.
Let's move on over to Netflix and Paramount, because there's news in the bidding war. How David Ellison finally got what he wanted: ten nos, and then finally got it done. For six months, the son of one of the world's richest men
kept hearing the same unfamiliar word, no, even before he closed a deal to combine his company
with a much bigger one, David Ellison was already plotting to do it again. Once his Skydance Media took control of Paramount, he turned his attention to a Hollywood icon, launching an audacious takeover bid for Warner Bros. Discovery that would give the Ellison family full control of a sprawling
media empire. So he came in with an offer of $19 per share, finally got it done at 31 a share.
Sleepwell says, so let me get this straight. Paramount approaches Warner Brothers for acquisition.
Netflix puts a higher offer for Warner Brothers. Paramount puts an even higher offer at 7X leverage.
Netflix declines to match the offer. Now Paramount and Warner Bros. will have to license all their content to Netflix to pay off all that debt. 3D chess.
A lot of people were throwing around the Succession moment. Congratulations on saying the biggest number.
So Paramount will be footing the $2.8 billion breakup fee paid from Warner to Netflix.
Which was paid Friday. Oh, it was paid already? Yeah. Yeah, and Netflix stock is up.
Paramount stock's also up.
And just, David Zaslav has to be one of the greatest dealmakers in history now.
Got the absolute maximum price for sure.
Dan Pfeiffer says, so somehow Netflix was able to force one of its rivals to overpay for another one of its rivals,
putting them into a messy, long process of unification and got paid $2.8 billion for it.
Zaslav apparently said the deal may not close.
If it doesn't close, we get $7 billion and we get back to work.
Also said if Warner Brothers is going to survive, they needed to be bigger and we needed to be global.
Getting into the Block news, which happened on Thursday, and we didn't get to cover the fallout. That happened like six months ago, right? Yeah, that's six months. Okay, six months ago. Seems about right. AGI age. Yeah. Most of you have heard about
Block's 40% layoffs by now, but the numbers are even worse. Engineering was hit harder. We've lost close to 70% of our engineers. The company you once knew as a prolific open-source software contributor no longer exists. And so I was wondering, like, they're laying off 40%. How will they
be shifted? Because the AI narrative, the job displacement narrative, that could be back office
people that are processing manual workflows. Or it could be software engineers who now there's a
smaller team that's getting more leverage out of AI tools. And so you write more off. There's also just
the world where you're a mature software company and you have lock-in and you're like, yeah, we actually
don't need to ship that many more features. We have sowed for so long, it is time to reap.
I am still bloat-pilled. I still believe that this is somewhat of a unique...
Bloat-driven. This is somewhat of a unique situation. But it didn't stop the market from
absolutely puking on Friday. Amex at one point was down something like 7%. MWT says I'm fully
on board with spiraling into a depressive episode over the rapidly approaching neo-feudalist
breakdown of society, but I worked at Square in 2017 and my job had no tasks. I sat on the roof eating free snacks all day with a MacBook. Maybe Block laying off a ton of employees is a sign that AI is going to destroy everything, or
maybe the stock is down 80% from the highs and they over hired and AI is a convenient
excuse. I don't think we ever said the words that we never rang the gong.
OpenAI raised a $110 billion round of funding from Amazon, Nvidia, and SoftBank.
We're grateful for the support from our partners and have a lot of work to do to bring
you the tools you deserve. That's probably
the biggest, that's a gong record?
Yes.
It's the biggest round for a private company ever.
And it's also about one quarter of venture capital outlays that are expected for 2026 in one round.
Absolutely.
Absolutely wild.
Invested by venture capitalists broadly.
Of course, this money is from the hyperscalers.
It's more complicated than your average VC deal.
I don't even know if this will be included in the VC funding data.
Because it's such a big round and it's from so many strategics.
But lots more capital for OpenAI.
See you tomorrow.
I can't wait.
Goodbye.
Have a wonderful evening.
