TBPN Live - Jensen on Dwarkesh, Cursor x XAI, Netflix Stock Sinks | Diet TBPN
Episode Date: April 16, 2026
Diet TBPN delivers the best of today's TBPN episode in 30 minutes. TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays 11–2 PT on X and YouTube, with each episode posted to podcast platforms right after. Described by The New York Times as "Silicon Valley's newest obsession," the show has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.
Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive
Transcript
Absolutely wild podcast between NVIDIA's Jensen Huang and Dwarkesh Patel.
So many clips, so much debate.
We're going to kick it off with the question of whether or not Nvidia is a car.
Let me start off with this vibe reel from semi-analysis because it is a treat.
Wait, and pause for a second?
You know why I was asking you?
Yeah.
Like what app you used to edit videos?
Oh, yeah, yeah.
It was to make this exact video.
And then I saw this and I was like, oh, I don't need to make it.
I don't, you're not talking to somebody who woke up a loser.
And that loser attitude, that loser premise makes no sense to me.
We are not, we're not a car.
We are not a car.
It's such a great edit.
It's absolutely, it's such a good vibe.
And it's so funny because, I love it.
This is Tokyo drift.
It's funny because it's this new type of vibe reel where the semi-analysis team, obviously, they recontextualized the two clips and added the beat to get the video going. But that final edit with the cars sort of morphing together, like, it goes from, there's a match cut. These are match cuts between one stick-shift move and another, or one car drifting and then another car drifting. That takes a really long time. Somebody clearly made this video just because they're an enthusiast of The Fast and the Furious. And then the semi-analysis team was able to quickly re-contextualize that to be about Jensen's answer on NVIDIA's moat and the CUDA moat and what's happening with NVIDIA.
So let's go through this question, because it's been humming for a while, and it's sort of bubbled up most recently because there was a whole bunch of news that Mythos might have been trained on Trainium, then maybe TPU, then maybe Blackwell, and it was sort of a mix, and it just feels like more and more of the AI labs are capable of making other chip sets work. In the early days, it was all about NVIDIA, and now it feels like the incentives are really high to figure other options out, and that creates a different competitive dynamic.
So we can run through my thoughts on this, and then we can go into the geopolitical
considerations as well, because that was another fascinating segment of the interview.
So Jensen, the CEO of NVIDIA, spent over 90 minutes, almost two hours when you include the ads, in the ring with Dwarkesh Patel.
It was electric, and the key question, at least for the start, was whether or not
Nvidia is a car.
And I'm only half joking about that.
It sort of was the key question.
Nvidia has been the market leader for years.
During the gaming boom, Nvidia was the gold standard for rendering PC graphics.
There was always decent competition from AMD.
But once the AI boom kicked off, the CUDA ecosystem significantly sped up development of AI systems and training of AI models.
And for those who aren't familiar, CUDA is a programming model that enables GPUs to accelerate demanding workloads by parallelizing computation. So instead of needing to work on the very low-level instruction sets, if you are a CUDA kernel engineer, a CUDA engineer, you can access all the power of the GPUs very efficiently while staying up in the more mathematical AI research, more standard Python and C++ programming paradigms. You don't need to dip down too low. But it's getting easier to dip down lower, and that's what we're seeing. So that created the CUDA moat, because developers were way more productive, and the biggest bottleneck to progress was allowing AI researchers to quickly test ideas and scale their experiments up to whole fleets of GPUs and eventually entire data centers. At the time, researchers really liked CUDA and really liked NVIDIA, and they did not want to have to spend hours and hours figuring out other hardware systems. They just wanted to run their tests and see if the model was getting better and continue to scale.
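The programming model being described can be caricatured in a few lines. This is a toy emulation in plain Python, not actual CUDA: the function names and the serial loop are illustrative only, but the shape is the point — you write the work for a single element (the "kernel") and the runtime fans it out across the data, which on a GPU happens in parallel across thousands of cores.

```python
def saxpy_kernel(i, a, x, y, out):
    """The 'kernel': the work for one thread, indexed by i (one array element)."""
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    """Stand-in for a kernel launch. On a GPU these n iterations would run
    concurrently across many cores; here we just loop serially to show the idea."""
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

The appeal the hosts are pointing at is exactly this split: the researcher thinks in per-element math, and the hardware-mapping details stay below the abstraction line.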
But recently, the biggest cost center for AI labs flipped from researcher time, more or less,
to compute capacity. And this creates a much larger economic incentive to figure out a way
to drive down the cost of chips used to train AI models. I wrote about this back on Tuesday, October 22nd of last year. I said, not every link in the supply chain of AI can be completely commoditized. NVIDIA has an insane amount of power, having ramped full-year revenue over the last three years from $27 billion to $60 billion to $130 billion. Absolutely insane top-line revenue ramp at that scale. And that's
why Jensen is so confident about how dominant this business is. It's the world's biggest company
for a reason. It has been growing spectacularly at immense scale. And not only did they grow the
top line, but net profit margin grew insanely. So it grew from 16% to 56%. Yeah, you do that when you
have pricing power and massive leverage because, you know, in this case, demand massively outstripping
supply, plus developer love, and kind of just like that. And I believe the forecast net margin was 70-something percent. We were hearing about 80 percent potentially. And so NVIDIA, you know, the plan is still to make an incredible amount of profit off of these chips, because they are
incredibly valuable. But all the hyperscalers and the AI labs, they are sort of incentivized now
to form a bit of an anti-NVIDIA alliance to commoditize the accelerator market and drive down
those margins at least a little bit. And so today the AI chip market is starting to look
much less monopolistic. AI coding agents can make it easier to write software that works on non-CUDA chip stacks. And the teams behind competing chips have plenty of resources
and economic incentives to bring performance in line with NVIDIA, even if it's going to be a big
hassle, even if you're going to need to spin up a team to get AMD or TPU working, it's going
to be worth it because you're talking about billions and billions of dollars spent on chips.
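The figures quoted above pencil out like this. Back-of-envelope arithmetic only, using the numbers from the discussion (full-year revenue ramp of $27B, $60B, $130B; net margin going from 16% to 56%), not company guidance:

```python
# All figures in billions of dollars, as quoted in the discussion.
revenue = [27, 60, 130]                 # full-year revenue over three years
margin_start, margin_end = 0.16, 0.56   # net profit margin, start vs. end

# Year-over-year growth: top line more than doubles each year.
growth = [revenue[i + 1] / revenue[i] for i in range(2)]
print([round(g, 2) for g in growth])    # [2.22, 2.17]

# Net profit implied by revenue * margin at each end of the ramp.
profit_start = revenue[0] * margin_start   # ~$4.3B
profit_end = revenue[-1] * margin_end      # ~$72.8B
print(round(profit_start, 1), round(profit_end, 1))  # 4.3 72.8
```

That roughly 17x jump in implied net profit is the prize the "anti-NVIDIA alliance" is incentivized to compress.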
For the past few years, NVIDIA...
Yeah, another example of, like, an instance where an AI lab had so much urgency that they were willing to spend whatever it took was Meta rebuilding, like, yeah, yeah, exactly.
Like, they were willing to outspend pretty much any other lab on talent.
because they didn't have time to find homegrown talent
or go through a normal recruiting process,
especially considering a lot of those engineers
were happy doing what they were doing.
Yeah, yeah, yeah, exactly.
Yeah, the incentives flip when you get to this scale
or the incentives just get so big you can build a whole team
for a specific thing, solve any problem.
And so for the past few years,
NVIDIA has sort of looked like SpaceX's launch program.
It's an incredible technology with very few viable alternatives,
and so that creates great margins,
as we've seen with SpaceX's launch capacity,
and they control something like 90% of the launch market.
While the products have not degraded, quite the opposite, actually,
Blackwell and Vera Rubin are incredibly powerful chips.
They're clearly on the leading edge there,
and they have an incredible amount of supply chain guarantees from TSM and across memory
and all the other different pieces of supply chain,
like Nvidia is ready to make more chips,
but increased competition makes the category look a little bit more like the car market than the rocket launch market.
And so that, I think, is where Jensen is pushing back and saying, no, there's a lot more that we bring to the table with our customers; don't think of us like a car.
This is not, you know, the difference between a Ford and a Toyota.
They all sort of get you to the same place, and you can swap one out.
You can be driving a BMW one day, go to the dealer, turn it in for a Mercedes, and you're
going to have a pretty...
Yeah, or the other example that I was using with the team earlier was this idea of, like, if you're a delivery company like FedEx and you have a lot of, like, Ford vans.
And then Hyundai comes to you and says, hey, you've been spending like $50,000 per Ford van.
But like would you consider a Hyundai van?
Yep.
We'll sell it to you for $35,000.
It's just as good.
And it could be like mildly inconvenient for the company because like they're kind of used to using Ford.
They maybe have like an internal team that does some maintenance.
but when you start looking at that cost differential,
it can start to get pretty interesting to say, like,
hey, why don't we try out some Hyundai's?
Let's, like, move over some of our routes to Hyundai's
and, like, see how that goes.
Sure.
They try it, and they're like, hey, this actually, this works pretty well.
Maybe there's, like, you know, higher maintenance, but it actually, like, pencils out.
Yeah.
And, like, we're going to actually start adding more Hyundai's to our overall fleet.
Yep.
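The van analogy above can be run as arithmetic. A hedged sketch with hypothetical numbers: the $50,000 and $35,000 prices come from the conversation, but the fleet size and per-vehicle switching friction are made up for illustration:

```python
# Hypothetical fleet-swap math for the Ford-vs-Hyundai analogy.
fleet_size = 1000                       # assumption: illustrative fleet
ford_price, hyundai_price = 50_000, 35_000   # per-van prices from the discussion
extra_cost_per_van = 5_000              # assumption: switching friction / maintenance

# Net savings if the whole fleet switches, after the switching cost.
savings = fleet_size * (ford_price - hyundai_price - extra_cost_per_van)
print(savings)  # 10000000
```

Even with a generous switching cost baked in, the differential stays large enough to justify "moving over some routes" as a pilot, which is the same logic the labs apply to billions of dollars of chips.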
And so Jensen's point on the podcast is that, like,
we're not selling cars, right?
This is not, like, something like you can kind of
swap out.
Yeah.
And Dwarkesh was obviously pushing back.
Yeah.
And for a lot of companies, swapping out Nvidia for TPU would be very difficult.
There are some workloads that Jensen focuses on. He says, we're not a tensor processing unit. We're an accelerator. There's a whole bunch of scientific computing workloads that work particularly well with NVIDIA.
And of course, Dwarkesh was just saying, like, yeah, well, it's perfectly fine to have, like,
a specialized chip for specialized workflows because the biggest companies in the world,
the biggest buyers here have like a single type of workload that they're trying to do.
And that's why they're using pure competitors.
Yeah.
The AI buildout does not seem to be slowing down, and as long as power can continue to be brought online and data centers can continue to find towns that will approve them, demand for chips will presumably grow.
But every chip designer and AI lab has to be praying that those net margins come down.
How quickly will it happen? Very early to tell.
The market did not react negatively to this back and forth.
Although the price of NVIDIA has been basically flat since last August. And I think that's why Jensen is trying to sort of, like, reset the narrative, because it's possible that, you know, there's a lot of movement up and down, they've been sort of flat, and there's a desire to sort of, like, reset and re-contextualize.
Yeah.
Inference demand is scaling.
Yeah.
In a very significant way.
Yep.
And the Nvidia stock price is relatively flat.
Yeah.
And since August, there have been so many different moments of fears of the AI bubble, fears of the products not finding product-market fit, fears that revenue might stagnate.
Like, we've seen a ton of bullish signals for, like, AI demand broadly.
Demand is there.
And so if you're a supplier, you should also be going up.
But there's been this overhang of what will happen to margins and market structure.
And so that is what people are going back and forth on.
Tyler, did you have any other thoughts on this?
Personally, I think I probably am more on the Dwarkesh side, where, like, yeah, I mean, at some point, like, these margins are so high, there's so much opportunity here. We've seen that, like, actually, if you're a big lab, you know, it's a lot of resources, yeah, but you actually can just, like, train a model on a different architecture. You can serve them on a different architecture. Yeah, like, you actually can figure these things out. As models get better, you can go lower and lower on the stack. Yeah, you can, you know, you can write the kernels, yeah, semi-autonomously. These things get faster and faster. So I'm probably much more on the Dwarkesh side. Yeah, it'll be interesting to see how Grok fits into this, the new CPU between different pieces of the puzzle.
And then also Dwarkesh makes this point that the supply agreements that NVIDIA has might be a bit of a moat for the next few years while TSMC line time is so constricted.
Yeah, but even then, like, you know, Jensen was saying that kind of supply constraints, this
is like a two to three year problem after that, like you can just solve these things.
Like I don't know how to think about this because, you know, to me it seems like, yeah,
so much of the value of NVIDIA is just, like, they have such an incredible relationship with TSMC.
Yeah.
But if, and it's so valuable because of how constrained it is.
Yeah.
But if that kind of constraint is maybe going to, you know, go away to some extent.
Yeah.
TSMC.
Yeah.
That's what he says. Like, they're going to increase, you know, they can build a new fab in, whatever, three years.
I mean, the big takeaway from the conversation is, like, you have one person who seems incredibly AGI-pilled, which is Dwarkesh.
Yeah.
And then you have Jensen who doesn't seem AGI pilled in that sense at all.
Yeah, yeah.
Right.
Dwarkesh asked about, he was kind of getting at it early on, the idea of, like, will you be able to just, like, prompt your way to NVIDIA chips?
He's like, you basically sell software.
Yeah, yeah.
And Jensen was basically like, no, I don't think that'll happen.
And then when you get to the whole geopolitical conversation too,
again, it's like, Dwarkesh is like, you're selling nukes. And Jensen is, in his view, like, I'm selling computers.
And that was like the big rift.
It was like these two kind of totally conflicting world views.
And it made for some very...
Pay-per-view.
It was a pay-per-view.
It should have been a pay-per-view.
Dr. Ram Krishna summed it up pretty well. He said every person's reaction to the Jensen plus Dwarkesh podcast can be extrapolated directly from whether they believe in the frontier labs achieving short timelines for AGI/ASI. If you believe in the labs achieving RSI and then AGI/ASI, for some definition of all three, in the next few years, you're probably sympathetic to the frame Dwarkesh adopts.
And so we can go into the export controls next and talk about that.
Metacritic Capital sort of summed up a little bit of why, just Jensen's rhetoric and how he wasn't conceding a lot of things. Dean Ball said the Dwarkesh-Jensen podcast reveals how inconsistent and unbattle-tested AI acceleration talking points are, especially when they're filtered through the prisms of corporate comms and mass politics. Strategically coherent accelerationism is possible, he says (I try), but not currently prevalent. And Metacritic Capital says, the problem is that Jensen doesn't concede anything. Compute spending going to the moon, one trillion in revenue in sight, models keep getting better, no unemployment, software cos are good, other Western accelerators are bad, Chinese competitors are good, and NVIDIA makes token costs decline 90% per year, but Chinese computer scientists are capable of making all the necessary algorithmic improvements. He also can't be AGI-pilled enough, because at the end of the day, he is an intellectual property company in the business of sending a file to TSMC.
I think it's part of Taiwanese culture to want to be loyal to all clients and not have favorite winners.
He doesn't want to betray his software co-customers.
He has antitrust concerns.
Yeah, he was making the bull case for software, which was that AI agents are going to use tools.
So he's like, there's going to be more users of software than ever.
Which is something I'm like somewhat sympathetic to, but yeah, it definitely...
It's still very easy to take the counter.
The flip side of that, yeah.
Well, let's play the distilled recap from Dwarkesh Patel of the back and forth with Jensen on export controls.
It's about four minutes and we'll watch this and then discuss.
If Chinese companies and Chinese labs and the Chinese government had access to the AI chips
to train a model like Claude Mythos with these cyber offensive capabilities and run millions of instances of it with more compute,
The question is, oh, is that a threat to American companies, to American national security?
First of all, Mythos was trained on fairly mundane capacity and a fairly mundane amount of it by an extraordinary company.
And so the amount of capacity and the type of compute that it was trained on is abundantly available in China.
And so you just have to first realize that chips exist in China.
They manufacture 60% of the world's mainstream chips, maybe more.
It's a very large industry for them.
They have some of the world's greatest computer scientists.
As you know, most of the AI researchers in all of these AI labs, most of them are Chinese.
They have 50% of the world's AI researchers.
And so the question is, if you're concerned about them, considering all the assets they already have, they have an abundance of energy, they have plenty of chips.
They got most of the AI researchers.
If you're worried about them, what is the best way to create a safe world?
Well, victimizing them, turning them into an enemy likely isn't the best answer.
They are an adversary.
We want the United States to win.
Having a dialogue and having research dialogue is probably the safest thing to do.
This is an area that is glaringly missing because of our current attitude about China as an adversary.
It is essential that our AI researchers and their AI researchers are actually talking.
It is essential that we try to both agree on what not to use the AI for.
With respect to China, we want to have a... Of course, we want the United States to have as much computing as possible. We're limited by energy, but we got a lot of people working on that, and we have to not make energy a bottleneck for our country.
But what we also want is we want to make sure that all the AI developers in the world
are developing on the American tech stack
and making the contributions, the advancements of AI,
especially when it's open source available to the American ecosystem.
It would be extremely foolish to create two ecosystems: the open-source ecosystem, and it only runs on the Chinese tech stack, a foreign tech stack, and a closed ecosystem, and that runs on the American tech stack.
I think that that would be a horrible outcome for the United States.
I mean, I think the concern, going back to that flop difference in the hacking, is, yes, they have compute, but there are some estimates that, because they're at 7 nanometer, they don't have EUV because of chip-making export controls, the amount of flops they're able to actually produce, they have, like, one-tenth the amount of flops that the U.S. has. And so with that, could they eventually train a model like Mythos? Yes, but the question is, because we have more flops,
American labs are able to get to these level of capabilities first. And because Anthropic got to it
first, they say, okay, we're going to hold onto it for a month while all these American companies, we give them access to it, they're going to patch up all their vulnerabilities, and now we release
it. Furthermore, even if they trained a model like this, the ability to deploy it at scale,
you know, if you had a cyber hacker, it's much more dangerous if they have a million of them
versus a thousand of them. So that inference compute really matters a lot. In fact, the fact that they have so many researchers who are so good is the thing that makes it so scary, because the thing that makes those engineers more productive is compute.
We should always be first, and we should always have more.
But in order for that outcome for what you described to be true,
you have to take it to the extremes.
They have to have no compute.
And if they have some compute, the question is how much is needed.
The amount of compute they have in China is enormous.
I mean, you're talking about the country.
It's the second largest computing market in the world. If they want to deploy and aggregate their compute, they've got plenty of compute to aggregate.
Very, very tense back and forth.
Dean Ball says, it's a shame Jensen mostly fails here
because the monoculture on export controls is bad.
If you're a young AI policy researcher
trying to make a name for yourself,
it's almost impossible to be taken seriously
unless you are pro-export controls.
Monocultures are usually bad.
And I am sympathetic to Dwarkesh's points there for sure, especially on the inference side: even if models exist in both worlds, like, having a whole bunch of good-guy compute that can go and patch bugs while the amount of attackers is much smaller.
It's just a matter of how many resources you have on each side.
That's a great point.
The only thing that they are talking around is Taiwan as a particular turning point, and how their various positions flow through to Taiwan policy. How China's stance on Taiwan factors in is something that I've always puzzled over, and I wish that both of them had articulated their sort of philosophy on actually wargaming out what export controls do to the likelihood of a Taiwan intervention or blockade or anything like that. I don't exactly know. I've been trying to, like, work through it, but I don't have a complete thesis, but we've been debating it back and forth all day. I don't know if you have a strong take on any of this, Jordy? I appreciate Matt Zytland's point. He said, I kind of appreciate that Jensen Huang seems relatively normal about non-business stuff compared to other tech founder CEO types, but then when it comes to NVIDIA's actual operations, he's a complete sicko.
Yeah, I saw a couple takes around this, that he had, like, some very, very strong points if you're deeper in the supply chain. I haven't, I couldn't really assess how he did on that. But there were definitely people that were in Jensen's camp. It was divided, which I think is why this went so viral. Jensen Huang and Dwarkesh today: most combative interview he's done in a while. The biggest regret, not funding. Has there been a more combative interview ever with Jensen? It seems unlikely. I think so. There are some funny details in here. Apparently the Larry-and-Elon-begged-Jensen-for-GPUs-at-dinner story? That never happened. We absolutely had dinner. At no time did they beg for GPUs, which is funny. Dwarkesh posted "tomorrow," and it's him and Jensen standing next to each other. And Alex Volkov said, hey, Dwarkesh, was this picture taken before or after the pod?
Because it does feel like it was a tense situation.
Although, to both of their credit, like Jensen, it felt like he loved being in the arena,
getting asked hard questions, like working through this.
There's this back and forth where Jensen's pushing back and Dwarkesh says,
I can drop it.
And he says, you don't need to drop it.
I'm enjoying this.
Like, let's hash this out.
And I thought that was very diplomatic and just good overall.
Well, Intel is up on the news, up 4% today, 10% over the past five days.
Almost at all-time highs; I think we're very close to the 2000 peak for Intel, which is also around where they were trading in 2021.
$67 a share, $330 billion company.
Clearly, with all of this backdrop and just the idea of more chips and maybe the CUDA ecosystem being something
that you can work around, can an American fab
run by Intel produce a chip that's viable for an AI lab?
It feels like increasingly yes,
that's certainly the argument that's being put forth
by Dwarkesh, and it would be very exciting. I think everyone would support an Intel resurgence.
There's some news around TerraFab potentially
getting involved.
But first, let's start with the scoop from Grace K
over at Business Insider.
She says, Scoop, Cursor plans to use XAI's infrastructure
to train its Composer 2.5 coding model.
Where's the Golden Scoop, Tyler?
According to people familiar with the matter.
Cursor will use tens of thousands of XAI's GPUs, they said.
And we got a scoop for Grace K.
This one's going to Grace.
Grace, congratulations.
You win the Golden Scoop of the day.
Congratulations.
Interesting to see. Somebody we talked about probably midway through last year, XAI has shown a tremendous ability, on the kind of infrastructure and data center side, to spin up a huge amount of compute very, very quickly, ahead of any timeline that any reasonable party would have probably expected.
And demand hasn't exactly followed in the way that they would have liked.
And so opening that up to a company like Cursor, who has all the demand, and what they really need is their own model.
Yeah, it was also interesting because I don't know if it was thrown out as a potential project
for other companies.
I feel like MSL mentioned it at some point, maybe OpenAI.
There was some talk of like, okay, if you're marshaling all this compute and you wind
up with too much, like, what do you do then?
And the idea of becoming a cloud provider if you have a data center, that's been the big question: like, you know, with everything that SpaceX is doing, and now TerraFab, they're going to be creating all this capacity.
Where's the demand for that capacity going to come from?
And so you can imagine a world in the future
where if SpaceX has a bunch of space data centers,
they open up that capacity to a bunch of companies
other than just Elon Inc. businesses.
So Grace says the setup effectively turns XAI into a kind of cloud provider
by renting some of its GPUs to other companies.
XAI could start generating revenue from its massive infrastructure while still developing its own AI models.
The arrangement could help the company offset the costs of building and operating data centers
while also deepening ties with a startup that has access to valuable coding data.
And so there could be some sort of trade deal going on.
Ed Ludlow at Bloomberg has a report on TerraFab. Musk's team is actively requesting price quotes and delivery timelines for a wide range of chip-making equipment: photomasks, substrates, etchers, deposition, cleaning, and testing tools, according to sources. Elon Musk's lieutenants have reached out to chip industry suppliers for his envisioned TerraFab project.
Remember, he was pictured with Lip-Bu Tan from Intel, I think last week.
Early steps in an audacious and likely arduous attempt to break into the production of cutting edge chips.
That is a very, very tall order.
But maybe there's never been a better time to break into the cutting-edge chip market, given that you don't need to, I mean, you sort of need to reinvent CUDA, but it's becoming easier potentially.
Well, there are two big releases from Anthropic and Open AI today.
Claude announced Claude Opus 4.7, our most capable Opus model yet.
It handles long-running tasks with more rigor, follows instructions more precisely,
and verifies its own outputs before reporting back.
You can hand off your hardest work with less supervision, they say.
Very good score on SWE-Bench Pro: 64.3%.
Excited to test it.
They say Opus 4.7 has substantially better vision. It can see images at three times the resolution and produce higher-quality interfaces, slides, and docs as a result.
Yeah, I think the most notable thing here is you have a model card that shows a model
that's not publicly accessible.
Yeah, they share the Mythos benchmark: Mythos and Opus 4.7 just sitting there, but of course, unless you're one of the select companies, you won't be getting access to it, at least yet.
Well, Open AI announced codex for almost everything.
It can use apps on your Mac, connect to more of your tools, create images, learn from previous actions.
Remember how you like to work and take on ongoing and repeatable tasks.
With computer use on macOS, Codex can now use any app by seeing, clicking, and typing with its own cursor.
It runs in the background without taking over your computer, working on tasks like front-end iteration, app testing, or any workflow that doesn't expose an API.
You can now generate and iterate on images with GPT Image 1.5 and Codex to create front-end designs,
mock-ups, game assets, and more without leaving your workflow.
Usage is included in your ChatGPT account, no API needed.
Automations can now run in the same thread.
Lots of updates here.
And Tebow says, Codex just got a lot more powerful.
Computer use in-app browser, image generation, editing, 90 plus new plugins to connect everything,
multi-terminal SSH, lots and lots of stuff.
So go give it a test, go take it for a spin.
OpenAI.com, of course.
And you can download it for MacOS.
The West creates the internet.
Try nailing that jello to the wall.
CCP nails it.
The West creates LLMs.
All right, try nailing this.
And the CCP picks up a hammer.
There's a piece in the Wall Street Journal opinion section.
AI is bound to subvert communism.
This is a very contrarian take, because with the internet, at least, the perception was of, like, decentralization, permissionlessness, anonymity, a lot of things that felt very democratic. AI is very centralizing by default. This is the Thiel take, of, like, AI is communist and crypto is libertarian. And so this is a pretty wild thing to argue. But, you know, read the opinion piece and see what they say. The nailing-jello-to-the-wall line, I believe that's from the Clinton administration: the idea that the internet would spread so widely that the Chinese Communist Party would not be able to control the population, everyone would be coordinating, it would be sort of like an Arab Spring-type moment. But of course, the firewalls went up, the surveillance happened, and nothing really changed, and the Communist Party seems stronger than ever. But this does sort of undergird a lot of what Dwarkesh has been saying about the risk of China having strong AI and stronger control over the population. It's always hard to get a read on exactly how things are rolling out in China. There are some people that seem to like it over there. Of course, Dwarkesh took a whole trip to China and made it back okay. So it's not all doom and gloom, but it is a tricky, tricky thing to argue. But we'll see. There's a ton of breaking news.
The big one is... Reed Hastings.
Reed Hastings.
Stepping off the board of Netflix and the stock is down tremendously. But this is good.
He's not stepping off the board yet; he announced that he's stepping off the board. But yes, I mean, same thing.
No, but, but again, it's good because it's good for Reed specifically. Oh, yeah.
Because it shows that people have confidence. Oh, sure, sure. Sure. Sure. I would expect Netflix
to make a quick recovery. But yeah, it's up 12%, 13% in the last month, down eight and a half
percent overnight after hours. But we'll see where the stock settles.
The alternative is a nightmare for Reed, because if he announced this and the stock popped 20%, he was handicapping the company.
And I mean, the flip side is that Ted Sarandos, it seems like he put on a masterclass over the last six months
with the paramount negotiations, not getting over his skis.
The shareholders wound up really liking how that all penciled out.
And so it seems like the company's in good hands and all of the different strengths that Netflix has,
continue to show across advertising and subscriptions.
And the big headline with Netflix is that they've kept their content budget essentially flat or slightly growing, while they've grown subscriptions and revenue and top line very steeply and very consistently, at a time when they haven't needed to invest exponentially more money in content.
Obviously, they spend a fortune on it,
but it's not growing as fast as their revenue is growing,
so their profits are growing, which is good news.
Hastings' departure marks the end of an era for Netflix, which under his leadership transformed from a DVD-by-mail
business to a juggernaut in subscription video streaming and disrupted Hollywood. He said,
my real contribution at Netflix wasn't a single decision, Hastings said in a statement.
It was a focus on member joy, building a culture that others could inherit and improve, and
building a company that could be both beloved by members and wildly successful for generations
to come. Well, we wish him the best
on his next chapter, whatever he winds up doing.
What an absolute run.
And lots more stories to talk about,
but we will be back with you on Monday at 11 a.m.
It's been an honor and a privilege.
And we'll see you then. Thank you for being with us here today. Leave us five stars on Apple Podcasts. Have a great rest of your day.
Sign up for our newsletter at TBPN.com.
Goodbye.
Throwing flashbang.
Throwing flashbang.
