All-In with Chamath, Jason, Sacks & Friedberg - DeepSeek Panic, US vs China, OpenAI $40B?, and Doge Delivers with Travis Kalanick and David Sacks
Episode Date: January 31, 2025(0:00) The Besties intro Travis Kalanick! (2:11) Travis breaks down the future of food and the state of CloudKitchens (13:34) Sacks breaks in! (15:38) DeepSeek panic: What's real, training innovation,... China, impact on markets and the AI industry (50:14) US vs China in AI, the Singapore backdoor (1:01:51) OpenAI reportedly in talks to raise ~$40B with Masa as the lead investor (1:10:37) DOGE's first 10 days (1:25:13) Future of Self Driving: Uber, Waymo, Tesla (1:38:04) Fed holds rates steady, how DOGE can impact rate cuts (1:44:17) Fatal DC plane crash Follow Travis: https://x.com/travisk Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect Referenced in the show: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf https://www.tomshardware.com/tech-industry/artificial-intelligence/chinese-company-trained-gpt-4-rival-with-just-2-000-gpus-01-ai-spent-usd3m-compared-to-openais-usd80m-to-usd100m https://www.cnbc.com/2025/01/27/nvidia-sheds-almost-600-billion-in-market-cap-biggest-drop-ever.html https://x.com/shrihacker/status/1884414667503853749 https://x.com/balajis/status/1884975064283812270 https://www.fool.com/earnings/call-transcripts/2025/01/29/meta-platforms-meta-q4-2024 earnings-call-transcri https://x.com/mrexits/status/1885017400308806121 https://www.wsj.com/livecoverage/stock-market-today-dow-sp500-nasdaq-live-01-28-2025/card/deepseek-s-ai-learned-from-chatgpt-trump-s-ai-czar-says-LoCYvz2Lm0riS0AuEoB5 https://www.wsj.com/tech/ai/why-distillation-has-become-the-scariest-wordfor-ai-companies-aa146ae3 https://techcrunch.com/2024/12/27/why-deepseeks-new-ai-model-thinks-its-chatgpt https://x.com/rauchg/status/1875627666113740892 https://www.ft.com/content/a0dfedd1-5255-4fa9-8ccc-1fe01de87ea6 https://x.com/satyanadella/status/1883753899255046301 https://en.m.wikipedia.org/wiki/Jevons_paradox https://x.com/pitdesi/status/1883192498274873513 https://x.com/rihardjarc/status/1884263865703358726 https://x.com/austen/status/1884444298130674000 https://www.cnbc.com/2025/01/30/openai-in-talks-to-raise-up-to-40-billion-at-340-billion-valuation.html https://x.com/america/status/1884372526144598056 https://x.com/DOGE/status/1884396041786524032 https://fred.stlouisfed.org/series/FYFSD https://www.whitehouse.gov/presidential-actions/2025/01/establishing-and-implementing-the-presidents-department-of-government-efficiency https://x.com/Jason/status/1884671945800573018 https://abcnews.go.com/538/trump-starts-term-weak-approval-rating/story?id=118146633 https://www.cnbc.com/2025/01/15/cpi-inflation-december-2024-.html https://x.com/chamath/status/1885068981905875241
Transcript
All right, everybody, welcome back to the all in podcast. We've got an incredible crew today.
Don't forget to go to our YouTube, blah, blah, blah, subscribe. And make sure you check out
Freeberg's surprise drop with his hero, Ray Dalio, live on all platforms today. How did that come
about, Freeberg, the little surprise drop? Yeah, it was great. I was talking with Ray about his new book, which
he just published on how countries
go broke, obviously.
Which country is going broke now, Freeberg?
America.
Well, I think he talks a lot about the historical context of what's gone on with the debt cycles
in different countries.
And basically, at the end of the book, he has a pretty, I think, important recommendation
to try and get the US to roughly 3% of GDP as our net deficit, net of all expense, including
interest expense.
So that's the recommendation to the administration.
I think it's pretty timely with the change in administration.
Anyway, great topics to talk through and really important book.
Awesome.
Well done.
And we are super delighted to have in the Red Throne,
Travis Kalanick.
He is the co-founder and CEO of Cloud Kitchens.
He also worked in the cab business for a little bit,
co-founder and former CEO of Uber.
And yeah, we had a great interview
at the All In Summit last year,
and he's back up from his media hiatus.
He's been in the lab working on cloud kitchens.
How you doing, brother?
I'm doing really well.
I got to say, just like at the summit, Jason, it's an honor to be in the presence of such
a prominent Uber investor.
Absolutely.
I mean, finally somebody has recognized my contribution.
The greatness of J. Cal.
Absolutely.
I'll mention it three or four times.
Appreciate it.
I'll give you the props.
You don't have to do it for yourself anymore.
Thank you.
I appreciate it.
Appreciate it. Give everybody a little overview of Cloud Kitchens and the business and how it's going
because people are obviously addicted to ordering food at home and it's quite a trend.
Yeah, the high level for it, the way to think about it is it's about the future of food.
What does the future of food look like?
You go, well, in 100 years, we'll start way out there,
in 100 years, you're gonna have very high quality food,
very low cost, that's incredibly convenient.
And there are gonna be machines that make it,
there are gonna be machines that get it to you, it's going to be exactly to your dietary preferences,
your food preferences, etc. It just comes to you and it's so inexpensive that it approaches or has
surpassed the cost of going to the grocery store. That's more of a today analogy. So you go,
100 years, of course, that's the thing. Nobody's going to be making food.
What about 20? What about 10? And so the company is real estate, software, and robotics. That's
all about the future of food. And if you can get the quality there and you can get that cost down to
start approaching the cost of going to the grocery store, you
do to the kitchen what Uber did to the car. And that's the thing.
Travis, do you have a-
And it's like a grind. It's like a lot, you know, bits and atoms in the Uber world. This
is like five times more atoms per bit. This is like heavy duty industrial stuff, probably
more along the lines of like, you know,
where Elon goes and some of his companies, like they're super interesting tech, but you've got to
grind out those atoms. Do you see people actually cooking in the future or does it become a
centralized service? And is it optimized to people's health? And what do you think the
implications to the food supply are if your vision holds. How do you think about all those things? Look, people will cook in the future as a hobby. I make a joke at the office. I'm like,
I like horses. I love horses, but I don't ride a horse to work. And it's going to be a little
bit like that. Whereas you can cook. It's a soulful thing to do. It's just very human. But you know, it's late. You know, mom gets home late from the office, needs to get the kids, you know, a nutritious meal. She doesn't have to cook it now, and she won't have to cook it. And she won't have to go to McDonald's either. It will be high
quality and convenient and low cost all at the same time. And yes, dietary preference,
everything, because it'll be hyper personalized. Like the way the internet is in content,
plus, plus, plus in terms of your specific preferences for what you want.
I mean, you've got these computers rocking, or these robots rocking, I think in Philly somewhere
in the lab where they're making bowls.
Yeah.
I mean, we're out of the lab at this point. We have our machines. So we have a machine called
the Bowl Builder that basically makes different cuisine types with bowls. So think of like-
Like sweet greens, like what they-
Yeah.
We're not working with these brands specifically, but I'll just sort of, it's a good analogy. Like think of Chipotle or Cava or Sweetgreen,
or you get the idea.
We created test brands that were like those things
and built the machine at the same time
as we were building an actual restaurant.
And we built that restaurant to prove that the machine works.
Then we have our customers now touring, checking out, we're
rolling out with five customers in April. They're using the machine and the way it's
going to go down is they will come into – and of course, we have the real estate, so we
have kitchens, tens of thousands of kitchens around the world. They will come into one
of our kitchens in a facility. It's a delivery-only restaurant. They'll prep the food in the morning and then
they will leave. The machine will – you order online, DoorDash, Uber Eats, etc. They'll
order online the way they do. Build your own bowl exactly as you want and the bowl gets all the ingredients dispensed, hot or cold, sauce, etc., gets
lidded. The bowl goes into a bag, the utensils go into the bag, the bag is sealed, and then
it comes out on a conveyor belt. And a machine gets the bag, it goes to the front of the
facility, gets put into a locker.
That locker then is sitting there,
a DoorDash or Uber Eats driver comes,
waves their phone with an app in front of a camera
and it pops open the locker that has the food
that you're supposed to have.
That's so cool.
So like if you're a restaurateur,
you're the grind of the on-demand meal,
which is the restaurant world, goes away.
You basically prep and that's asynchronous from
when people order food. The machine does the final assembly or what's known as plating, essentially.
Do you think there's a service in the future where my physiology, I can share that with you,
with Cloud Kitchens, and you guys just can always be optimizing my food
based on what I know is good or bad for me. So first, what we do is we serve the restaurants.
So what happened, so Chamath, you'll be sharing your dietary preferences with
Uber Eats or DoorDash or Sweetgreen or somebody. Like our customer promise at our company is, we serve those
who serve others. Or put another way, it's infrastructure for better food. So we are either the AWS or the
Nvidia or whatever you want to call it, but for food. If that makes sense, we're behind the scenes,
we're the infrastructure. And so you'll give your preferences.
Right. It should be a brand then, like Sweetgreen or whomever, Chipotle, that says,
hey guys, share with me an encrypted hash of your dietary restrictions, needs, whatever,
your lipid panel. And I'll customize this thing and then you enable that on the back.
Yeah, it's pretty close Chamath, right? You can do that. Authenticate your Apple Health.
That's really awesome. You just authenticate Apple health.
When these bowls come off the line,
and see how I talk, it's like an assembly line.
When these bowls come off the line,
on the label on the bowl
is how many grams of every ingredient is in it.
Plus a picture of what it was before we put the lid in,
that can be sent to the person
while the bowl's on its way via courier, right?
What do you think, Travis, about this whole Maha movement and just the food supply itself? So then,
how does that change? Do restaurants embrace more farm-to-table stuff?
I think, look, I think what we see with supply chains in a bunch of different industries,
it's just going to get super wired up. So right now, we're at the point of manufacturing,
but what happens – so you go,
okay, we're doing assembly. Then you go, okay, what about prep? Then you go further upstream and
you're like, what about supply chain, like Sysco, US Foods? And then you go further up and you're
like, well, how does, how does the mechanization occur on farms and in agriculture? And then how
does that all get wired up to serve the customer and sort of what they're looking for.
So like you really can know exactly what kind of wheat was put into that food, whether it
was organic for real or not, like what was the actual field that came from things like
this.
You can imagine like really getting tied about supply chain as it relates to dietary stuff.
And as it relates to like Maha, like hell to the yes. I mean, I ordered a couple different,
I went to the, I went to RFK Jr's website and they have like the, he has merch, he has
Maha merch. I have the green Maha merch hat. I should have worn it today. I'm all about
it.
Did you get the onesie?
That's amazing. The onesie was crazy.
Your bowl builder, Friedberg, you tried to do this, right?
Eatsa.
Eatsa and...
We had a bowl builder 10 years ago, or 2015, yeah, 2016.
Diego saw it, he actually visited it when we built it.
And we designed the system around a canister mechanism.
So all the food prep was done in a similar sort of like commissary model.
And then it was loaded in bulk and then put into little canisters.
And there were 30 slots in the canister dispenser.
And then the canister would move down the device, open up,
and you could assemble bowls with rice and beans and all sorts of stuff.
The whole thing was automated.
And we were in the process of building out our first automated store when I actually
took a medical leave of absence from Eatsa and ultimately the company did not get it
into production.
But we had great working demos and it was a very, yeah, I mean, it was just definitely
a no brainer that this was going to happen.
So you must love this.
You love this.
Yeah. And at the time we actually had, I'll tell you guys this, we actually had a term sheet with Chipotle.
This was nine years ago to actually put this into Chipotle stores.
And then we were in the early conversations with Sweetgreen at the time as well.
And obviously Jonathan and team have gone on to develop their own system.
But, you know, basically you can reduce so much of like QSR down to this bowl-based system and automate it
as Travis is doing.
So it's just a no-brainer.
And it's certainly necessary in a time
when there's either a labor shortage or labor price
inflation that's causing a real issue with the ability.
And yeah, this is the original automats in New York
in the early 20th century.
I love this.
Yeah, they had a commissary behind that wall,
and they made like plates of food.
You put in there, you put a quarter in,
you turn the knob, and you get your meal out.
It's super cool, right?
That's the classic artificial, artificial intelligence, right?
This is like the mechanical Turk thing.
I mean, look, here's the thing.
Here's a little nuance that's super interesting about automation
in QSR restaurants, is that they have an existing brick and mortar
that's built a certain way.
That layout is meant for humans
and for those humans to work in certain processes
in exact and very specific way.
Every square inch of that kitchen and that space is dialed.
When you go and put a machine like this in,
it changes the whole thing.
And so just to get going, you've got to do, you've got to, like if you're to replace the
front line in Chipotle, you've got to take out that front line.
You got to demo it.
You got to put in a new machine.
That's the challenge that they all had.
And so now it's like a huge amount of capex.
My store's down for two to three months, and the economics start to not work.
And by the way, I still have to have humans
in that brick and mortar.
And so, you know, look, we have a different take.
We're in that delivery-only model.
So these are, it's true infrastructure
for making food behind the scenes for delivery
so you don't have these issues.
And of course, our setup, our infrastructure,
these kitchens are
designed for these kinds of machines to be in them and vice versa. We've designed the machine
to be in them. When we did this early at Eatsa, it was like food delivery was very early. We built
these Eatsa restaurants that were smaller footprint. We had an 800 square foot restaurant
that was doing 3 million a year in revenue and it had a handful of people working in it,
but we were putting about 800 people an hour
during the lunch rush through that restaurant,
ordering custom bowls.
This was by one market, right?
That's so, Jay.
One market, exactly.
And so, yeah.
By the way, did you guys notice that J. Cal was plugging
his product there in the background,
even though it has absolutely nothing to do
with what Travis was saying?
Oh, welcome back to the show.
Nothing's changed. Sax is here. No one else even noticed that.
I just heard this voice from above. It was the czar of AI and crypto.
I was like, wow. That's all. Sit back and listen.
The czar's back.
Sax, any anecdotes you want to share about life in DC?
How exciting it's been in the administration the first week?
It's been amazing. I mean, it's hard to believe it's only been a week, right?
So you're in the White House or that building next to it? Do you have an office?
That building. You mean the Treasury building?
I don't know. Somebody was talking about there's a building next to it or something.
I don't know.
I have an office in the old executive office building, otherwise known as the Eisenhower
building. And then I have a pass where I can just walk over to the West Wing if I want
To walk over to it
There's kind of a whole White House complex behind the gates that the West Wing is part of it and the Eisenhower building and there's a couple other
Buildings in that complex. It's really cool. It is really neat to
To show up for work at the White House. It's awesome. It's like being in a movie or something or a TV show.
It is really cool.
It's awesome.
Any interesting meetings you can talk about?
And I mean, I know we are here today to talk about DeepSeek,
but any interesting meetings or anecdotes from just the vibes
and walking around?
What's the coffee like?
Is there like a commissary?
You run into anybody interesting?
There is a commissary actually in the White House called the Navy Mess.
And I think they're just opening up for business now.
That is one of the cooler things you could do is you can take people to lunch at the
Navy Mess.
Oh, look forward to it.
J. Cal just invited himself.
Look forward to it.
I look forward to taking Chamath and Freberg there.
I'll wear my MAGA hat.
All right.
Well, let's get started.
You're here because we have a very specific-
He's here because the world is ending, Jason.
The Western world is ending.
Okay.
The Western world is ending and David Sacks is going to save it, but we had a little bit
of a freakout this last week regarding DeepSeek.
If you don't know, that's a Chinese AI startup, they released a new language model.
It's called R1.
And it's on par basically with some of the best models in production in the West, like
OpenAI's O1 model.
But they claim, and listen, you can trust claims coming out of China, you know, for
what it's worth.
They claim to have done this all for $6 million, only 2,000 GPUs. For comparison, OpenAI reportedly spent $80 to $100 million to train GPT-4, which you're
all using now. And Sam claims they're going to spend a billion dollars training GPT-5.
And so that's about 7% of the cost of GPT-4. Obviously there are export restrictions on Nvidia H100s to China. So there's a big debate
as to whether they actually have H100s or not. And Monday was a bloodbath in the stock market. Nvidia had
the worst day in the history of the stock market in terms of total dollar amount of market cap lost.
It was down 17%, which is $600 billion. TSMC was down,
ARM was down, Broadcom was down. So I guess everybody's asking the question, how did they
do this? Did they do it? And then there's a bunch of debate on whether they stole, which
is kind of rich coming from OpenAI, which got caught red handed stealing everybody else's
content. And now they're crying foul that the Chinese stole, or trained on,
did what's called distillation of, their model in order to build theirs. Sacks, obviously you are the
czar of AI. I'm curious what your take on all this is and thanks for coming. Well, I think one of the
really cool things about this job is just that when something like this happens, I get to kind of talk to everyone and everyone wants to talk.
And I feel like I've talked to maybe not everyone and like all the top people in AI, but it
feels like most of them.
And there's definitely a lot of takes all over the map on DeepSeek, but I feel like I've
started to put together a synthesis based on hearing from the top people in the
field. It was a bit of a freak out. I mean, it's rare that a model release is going to
be a global news story or cause a trillion dollars of market cap decline in one day.
And so it is interesting to think about like, why was this such a potent news story? And
I think it's because there's two things about that company that are different. One is that
obviously it's a Chinese company rather than an American company.
And so you have the whole China versus US competition.
And then the other is it's an open source company or at least it open source the R1
model.
And so you've kind of got the whole open source versus closed source debate.
And if you take either one of those things out, it probably wouldn't have been such a big story. But I think the synthesis of these things got a lot of people's
attention. A huge part of TikTok's audience, for example, is international. Some of them
like the idea that the US may not win the AI race, that the US is kind of getting a
comeuppance here. And I think that fueled some of the early attention on TikTok. Similarly,
there's a lot of people who are rooting for
open source or they have animosity towards open AI. And
so they were kind of rooting for this idea that, oh, there's this
open source model that's going to give away what open AI has
done at one 20th the cost. So I think all of these things
provided fuel for the story. Now I think the question is, okay,
what should we make of this? I mean, I think there are things that are true about the story
and then things that are not true or should be debunked. I think that let's call it true
thing here is that if you had said to people a few weeks ago that the second company to
release a reasoning model
along the lines of O1 would be a Chinese company, I think people would have been surprised by that.
So I think there was a surprise.
And just to kind of back up for people,
there's two major kinds of AI models now.
There's kind of the base LLM model like ChatGPT-4o,
or the DeepSeek equivalent was V3,
which they launched a month ago.
And that's basically like
a smart PhD. You ask a question, it gives you an answer. Then there's the new reasoning
models which are based on reinforcement learning, sort of a separate process as opposed to pre-training.
O1 was the first model released along those lines. You can think of a reasoning model
as like a smart PhD who
doesn't give you a snap answer but actually goes off and does the work. You
can give it a much more complicated question and it'll break that
complicated problem into a subset of smaller problems, and then it'll go step
by step to solve the problem. That's called chain of thought. And
so the new generation of agents that are coming are based on this type of idea of chain
of thought: that an AI model can sequentially perform tasks and figure out much more complicated
problems.
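To make the snap-answer versus chain-of-thought distinction concrete, here is a minimal Python sketch. The `ask_model` helper is a hypothetical stand-in for whatever model endpoint you use, and real reasoning models do this decomposition internally rather than through an external loop like this.

```python
# Illustrative only: ask_model is a stand-in for whatever LLM endpoint you use,
# not a real vendor SDK call.
def ask_model(prompt: str) -> str:
    return "stubbed model output for: " + prompt.splitlines()[0]

question = "A store sells pens in packs of 12 for $3. How much do 60 pens cost?"

# Base-model style: one shot, take the snap answer.
snap_answer = ask_model(question)

# Reasoning-model style: decompose into sub-problems, solve them in order,
# then compose the final answer (a crude stand-in for chain of thought).
plan = ask_model("List numbered sub-steps (do not solve yet):\n" + question)
steps = [line for line in plan.splitlines() if line.strip()]

scratchpad = []
for step in steps:
    work_so_far = "\n".join(scratchpad)
    result = ask_model(
        f"Question: {question}\nWork so far:\n{work_so_far}\nNow do: {step}"
    )
    scratchpad.append(f"{step} -> {result}")

final_answer = ask_model(
    f"Question: {question}\nReasoning:\n" + "\n".join(scratchpad)
    + "\nGive only the final answer."
)
print(snap_answer)
print(final_answer)
```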
So OpenAI was the first to release this type of reasoning model.
Google has a similar model they're working on called Gemini 2.0 Flash Thinking.
They've released kind of an early prototype of this called Deep Research 1.5.
Anthropic has something,
but I don't think they've released it yet.
So other companies have similar models to O1
either in the works or in some sort of private beta,
but DeepSeek was really the next one after OpenAI
to release the full public version of it.
And moreover, they open sourced it.
And so this created a pretty big splash.
And I think it was legitimately surprising to people that the next big company to put
out a reasoning model like this would be a Chinese company.
And moreover, that they would open source it, give it away for free.
And I think the API access is something like one twentieth the cost.
So all of these things really did drive the news cycle.
And I think for good reason, because I think that if you had asked most people in the industry
a few weeks ago, how far behind is China on AI models, they would say six to 12 months.
And now I think they might say something more
like three to six months, right? Because O1 was released about four months ago, and R1
is comparable to that. So I think it's definitely moved up people's timeframes for how close
China is on AI. Now, let's take the claim that they only did this for $6 million.
On this one, I'm with Palmer Luckey and Brad Gerstner and others, and I think this has been
pretty much corroborated by everyone I've talked to that that number should be debunked.
First of all, it's very hard to validate a claim about how much money went into
the training of this model. It's not something that we can empirically discover. But even if you accepted it at face
value, that $6 million was for the final training run. So when the media is hyping up these
stories saying that this Chinese company did it for $6 million and these dumb American
companies did it for a billion, it's not an apples to apples comparison.
If you were to make the apples to apples comparison, you would need to compare the final training
run cost by DeepSeek to that of OpenAI or Anthropic.
What the founder of Anthropic said and what I think Brad has said, being an investor in
OpenAI and having talked to them is that
the final training run cost was more in the
tens of millions of dollars
about nine or ten months ago. And so, you know, it's not six million versus a billion.
Okay, the billion-dollar number might include all the hardware
they bought, the years of putting into it, a holistic number as opposed to the training number.
Yeah, it's not fair to compare, let's call it a soup to nuts number, a fully loaded number
by American AI companies to the final training run by the Chinese company.
But real quick, Sacks, you've got an open source model and the white paper they put
out there is very specific about what they
did to make it and the results they got out of it. I don't think they give the training
data, but you could start to stress test what they've already put out there and see if you
can do it cheap, essentially.
Like I said, I think it is hard to validate the number. Let's just assume that we give
them credit for the 6 million number,
my point is less that they couldn't have done it,
but just that we need to be comparing likes to likes.
So if, for example, you're gonna look at
the fully loaded cost of what it took DeepSeek
to get to this point,
then you would need to look at
what has been the R&D cost to date
of all the models and all the experiments
and all the training runs they've done, right?
And the compute cluster that they surely have.
So Dylan Patel, who's a leading semiconductor analyst,
has estimated that DeepSeek has about 50,000 Hoppers.
And specifically, he said they have about 10,000 H100s,
they have 10,000 H800s and 30,000 H20s.
Now, the cost of that-
Is they, Sacks, sorry, is they DeepSeek
or it's DeepSeek plus the hedge fund?
DeepSeek plus the hedge fund.
But it's the same founder, right?
And by the way, that doesn't mean they did anything illegal,
right?
Because the H100s were banned under export controls in 2022.
Then they did the H800s in 2023.
But this founder was very farsighted. He was very ahead of the curve.
And he was through his hedge fund,
he was using AI to basically do algorithmic trading.
So he bought these chips a while ago. In any event,
you add up the cost of a compute cluster with 50,000
plus hoppers and it's going to be over a
billion dollars.
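As a rough back-of-the-envelope check on that figure (the per-GPU prices below are assumptions for illustration, not numbers cited on the show):

```python
# Back-of-the-envelope cost of a ~50,000-GPU Hopper cluster.
# Per-unit prices are rough assumptions for illustration only.
gpus = {
    "H100": (10_000, 30_000),  # (count, assumed $ per GPU)
    "H800": (10_000, 25_000),
    "H20":  (30_000, 12_000),
}

total = sum(count * unit_price for count, unit_price in gpus.values())
print(f"GPU hardware alone: ~${total / 1e9:.2f}B")
# ~$0.9B before networking, power, and data center build-out,
# which push the all-in figure past $1B.
```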
So this idea that you've got this scrappy company that did it for only 6 million, just
not true.
They have a substantial compute cluster that they use to train their models.
And frankly, that doesn't count any chips that they might have beyond the 50,000, you know,
that they might have obtained in violation of export restrictions that obviously they're not
going to admit to. And we just don't know. We don't really know the full extent of what they have.
So I just think it's like worth pointing that out that I think that part of the story got overhyped.
It's hard to know what's fact and what's fiction.
Everybody who's on the outside guessing
has their own incentive, right?
So if you're a semiconductor analyst that effectively
is massively bullish on Nvidia, you
want it to be true that it wasn't possible to train
on $6 million. Obviously, if you're the person
that makes an alternative that's that disruptive, you want it to be true that it was trained on
$6 million. All of that I think is all speculation. The thing that struck me was how different their
approach was and TK just mentioned this, but if you dig into not just the original
white paper of DeepSeek, but they've also published some subsequent papers that have refined some of the details. I do think that this is a case and Sacks, you can tell me if you disagree,
but this is a case where necessity was the mother of invention. So I'll give you two examples where
I just read these things and I was like, man, these guys are like really clever. The first is,
as you said, let's put a pin
in whether they distilled O1,
which we can talk about in a second.
But at the end of the day, these guys were like,
well, how am I gonna do this reinforcement learning thing?
They invented a totally different algorithm.
There was the orthodoxy, right?
This thing called PPO that everybody used.
And they were like, no, we're gonna use something else
called, I think it's called GRPO or something.
It uses a lot less computer memory
and it's highly performant.
So maybe they were constrained, Sacks, practically speaking,
by some amount of compute that caused them to find this,
which you may not have found if you had just a total
surplus of compute availability.
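For the curious, here is a minimal sketch of the group-relative idea that distinguishes GRPO from PPO, based on how DeepSeek's papers describe it; the reward values and the surrounding training loop are placeholders, not their actual implementation. The memory saving comes from computing advantages against the group's own statistics, so no separate critic (value) network has to be trained and held in memory the way PPO requires.

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: score each sampled answer against the
    group's own mean/std instead of a learned value (critic) model."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid divide-by-zero
    return [(r - mean) / std for r in rewards]

# Toy example: one prompt, a group of 4 sampled answers, scalar rewards
# (e.g. 1.0 if the final answer / format is correct, else 0.0).
group_rewards = [1.0, 0.0, 1.0, 0.0]
advantages = grpo_advantages(group_rewards)
print(advantages)  # answers above the group average get positive advantage

# In training, each sampled sequence is then reinforced in proportion to its
# group-relative advantage (with a clipped ratio and a KL penalty to a
# reference model, analogous to PPO's objective but without the critic).
```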
And then the second thing that was crazy is everybody is used to building models and
compiling through CUDA, which is NVIDIA's proprietary language, which I've said for a
couple of times is their biggest moat, but it's also the biggest threat vector for lock-in.
And these guys worked totally around CUDA and they did something called PTX, which goes right to the bare metal,
and it's controllable, and it's effectively
like writing assembly.
Now, the only reason I'm bringing these up is we,
meaning the West, with all the money that we've had,
didn't come up with these ideas.
And I think part of why we didn't come up
is not that we're not smart enough to do it,
but we weren't forced to because the constraints didn't exist. And so I just wonder how we make sure we learn this
principle. Meaning when the AI company wakes up and rolls out of bed and some VC gives them $200
million, maybe that's not the right answer for a Series A or a Seed. And maybe the right answer
is $2 million so that they
do these DeepSeek-like innovations. Constraint makes for great art. What do you think, Friedberg, when you're looking at this?
Well, I think it also enables a new class of investment opportunity.
Given the low cost and the speed, it really highlights that maybe the opportunity to create
value doesn't really sit at that level in the value chain, but further upstream.
Balaji made a comment on Twitter today that was pretty funny,
about the wrapper.
He's like, turns out the wrapper may be the moat, the money moat, which is
true at the end of the day, if model performance continues to improve, get
cheaper, and it's so competitive
that it commoditizes much faster than anyone even thought,
then the value's gonna be created somewhere else
in the value chain.
Maybe it's not the wrapper.
Maybe it's with the user.
And maybe by the way, here's an important point,
maybe it's further in the economy.
When electricity production took off in the United States,
it's not like the companies are making a lot of money that are making all the electricity
It's the rest of the economy that accrues a lot of the value
Well, you're about to see a big test of this, because if OpenAI raises $40 billion at $340 billion
(that just hit the wire),
the underwriting logic at $340 billion is exactly what you just said, Freeberg.
It is the wrapper, meaning ChatGPT is the next killer app. It's
getting to a billion-plus MAUs, hundreds of millions of DAUs.
It's competing for consumer usage. That's the model: consumer usage.
Which puts them on a collision course with Meta. It's the only company that could really impact
that, because it's the only company right now that has billions of eyeballs, of DAUs per day.
And by the way,
Zuck said this in his earnings release.
He's like, there's only going to be one company
that brings AI to a billion plus people,
and it will be us.
Some version of that quote
is in his earnings release yesterday.
And then Microsoft showed weakness in their cloud.
And then Microsoft's down 6% today.
I think it's a window for OpenAI to say, we're going to go up against Meta.
This is it.
We're going to be the players.
Everyone's ignoring Google at this time.
What do you guys think is happening right now between OpenAI and Microsoft?
If it's true that this distillation thing actually happened, well, there's only one
place where you could have distilled the O1 model,
and it's on Azure. So what the hell is going on over there?
Well, and their O1 is hosted on-
Explain distillation real quick.
Yeah. So when you have a big large parameter model, the way that you get to a smaller,
more usable model along the lines of what Sacks mentioned is through this process
called distillation, where the big model feeds the little model.
So the little model is asking questions of the big model and you take the answers and you refine.
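A minimal sketch of that teacher-to-student loop, assuming a generic pipeline; `query_teacher` and the output file are illustrative placeholders, not any specific lab's setup.

```python
import json

def query_teacher(prompt: str) -> str:
    # Placeholder for calls to the large "teacher" model's API.
    return "teacher answer (with reasoning) for: " + prompt

prompts = [
    "Prove that the sum of two even numbers is even.",
    "Plan a 3-step approach to debug a flaky test.",
]

# Step 1: harvest (prompt, teacher answer) pairs from the big model.
distillation_set = [
    {"prompt": p, "completion": query_teacher(p)} for p in prompts
]

# Step 2: the pairs become supervised fine-tuning data for the smaller
# "student" model, which learns to imitate the teacher's outputs.
with open("distillation_sft.jsonl", "w") as f:
    for row in distillation_set:
        f.write(json.dumps(row) + "\n")

# Step 3 (not shown): run ordinary SFT on the student with this file.
```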
By the way, you can see this, Nick, I sent you a clip. You guys can see this.
I mean, there's clearly distillation happening. Nick,
can you show the clip of the DeepSeek run where it shows the China answer and then deletes it?
What was Winston's job in
1984, right? And it sort of starts to go through this whole summary. And then the person says,
are there any actual states that currently do that? Hold on, here it goes. It says,
North Korea. Wait, it goes China. And then wait, watch this, boom. So the reason why this is
happening is like you're seeing this chain of thought, you're seeing the several layers, and then it's catching it after the fact. So we know that this is distilled from some
other model. And my only point there, it's the little tongue in cheek is right now, when you go
and use OpenAI, you're using it sitting in an Azure instance somewhere, right? So this is Microsoft's
cloud infrastructure that runs it. So it begs the question, it's not that it's O1's fault or OpenAI's fault that this distillation happened. And I'm not
trying to assign blame, but typically if this were to happen, you'd look to your cloud provider
and say, how are you letting this happen? And I don't think anybody's had a good answer for that.
Well, and the cloud provider is hosting R1 now, so they're literally undercutting ChatGPT
and OpenAI at the same time.
Just to clean that up, they're hosting their own copy of it, right?
Because R1's open source.
When you say they, who are you referring to, Sax?
Microsoft.
Yeah, Microsoft is hosting their version of R1, which means they are actively subverting
their partner, OpenAI, and pushing people to a cheaper
model.
Well, whatever.
I mean, look, Amazon's going to host their own version of R1.
Groq has a version of R1 that they're hosting.
Yeah, we have one.
Cerebras just rolled it out.
It's open source now.
You guys have a buddy who has R1 on his laptop, you know?
Yeah, exactly.
Yeah, but if it was stolen, and the IP was stolen, as Sam is claiming, that would be like, you'd
think he'd be able to call up Satya and say, hey, can you not put the stolen IP on your
server and promote it to everybody at a lower cost?
It just shows Microsoft has no loyalty to OpenAI.
Yeah.
But you'd think they would have loyalty.
They have no loyalty.
But guys, what it would take to distill O1, like brute force, it wouldn't be like, oh, geez, I can't believe it was distilled.
It would be like such a massive number of calls against an API or against something.
Something.
That it wouldn't be unnoticed.
Oh, they actually did come out and say they blocked some suspicious activity recently.
No, no, but they're always doing that. That's constant.
You're always doing that. That's like the old school. Go ahead, Sacks.
Let me address the distillation point. I mentioned this a few days ago on Fox News that I thought it
was likely or possible that distillation had occurred and there was some evidence for this.
It became like a news story. I didn't even realize that saying that would be news because it's kind of an open secret in Silicon Valley.
Everyone I talked to. They're doing some level of distillation.
Yeah. Because you need to test your model against theirs anyways.
Yeah. And every single person I've talked to basically has agreed that there was some distillation
here from open AI. Now, that doesn't mean it was the only thing going on here. I
mean to be sure the DeepSeek team is very smart and there were some innovations
but also there was some distillation and really this wasn't even a fresh news
story I think from the point of view of Silicon Valley because a month ago we
had a press cycle in Silicon Valley when DeepSeek's V3 model came out
that DeepSeek V3 was self-identifying as ChatGPT.
When you would ask it, who are you?
Like, what model are you?
Five out of eight times, V3 would tell you
that it was ChatGPT 4.
And there's lots of videos and examples of this online
that have been posted, right?
The point is that we knew a month ago that V3 had been trained on a substantial amount
of chat GPT output, obviously, because V3 was self-identifying as chat GPT.
And there's two ways that that could have happened.
So the, let's call it innocent explanation, is that DeepSeek had crawled the web and found
lots of published output from ChatGPT
and then trained on that.
And that wouldn't be a violation of OpenAI's terms of service
or their IP.
Or the other explanation would be
that they used the API from OpenAI and basically, you know.
Went to town.
Yeah, went to town.
And there's no way, I think, based on what we know to prove that one way or another,
but I know what most people think happened.
And at the end of the day, OpenAI can probably figure it out.
And they've indicated that they think there was some improper distillation here.
But in the Financial Times, it says, OpenAI says it has found evidence that Chinese artificial
intelligence startup DeepSeek used the US company's proprietary models to train its own open source competitor.
Right, that's what I'm referring to.
So they say-
They've been very clear about this.
By the way, you have to be sympathetic, I think, to OpenAI in this, because if you're building a
startup, you're trying to raise money. We've all gone through this cycle, guys, where it's like,
there's momentum. We celebrate internally the momentum. That's what gets you the energy to
push your team even further and harder. And then all of a sudden it turns out that some portion
of that, like Travis said it well, like there's probably a chart inside of OpenAI offices where
you're showing how many times these APIs are getting hit, right? You know, how many times
these endpoints are getting hit. It all looks positive and then you realize that some portion of it was
actually bad and trying to undercut your value. It's a hard pill to swallow and then you have to
course correct very quickly. You have to lock down. This is one area where-
Security.
Exactly. We have not talked about this. You have to lock these models down. Now you have to lock
the endpoints down. Look, in the Biden administration, if this had happened,
the first conversation would have been, we need to KYC the people that use these models. And it's
like, what are you talking about? We don't KYC the cloud. If you're trying to use like an EC2
endpoint or an S3 bucket, you don't have to all of a sudden prove who you are. You just use a
credit card and go. That's the whole point of why proliferation can happen so quickly. But if we
take the wrong takeaways from
this period, there's going to be a bunch of people that will clamor to lock these folks down and make
innovation go much slower. I think that that would be bad. Here's the other side, and totally agree,
Chamath, but here's the other side. You go through the white paper, you see what it is they did,
what they innovated on, the science behind it, the thoroughness.
And you're like, these guys are bad ass. It does not feel or sound like somebody who took
something just when you get through it. It could be that OpenAI wrote the white paper for them,
just putting it out there. But it's real innovative.
I agree with that.
Real innovation, strong tech. You're like, this is legit. I agree with that. Real innovation, strong tech. You're like, this is legit.
I agree with that.
But in that paper, they're very vague about where the data is coming from. They're fairly
transparent about everything else they did, but they're not really clear about the data.
And specifically, they say that to get from V3, which is the base model, to R1, which
is the reasoning model, they had about 800,000 samples of reasoning.
They were quite unclear about where those reasoning samples came from.
By the way, it is remarkable that you can get from a base model to an R1 with just 800,000
samples.
But this is the problem.
Like, we, meaning the Western AI
community, we've been trudging around on this path where we've been very – we had a very
orthodox approach. The only way you can do reinforcement learning is through PPO. Okay,
but is that true? It turns out that if you're a really smart team that has no other choice,
you move away and you invent your way out of it.
And so we have to get that example too.
I think it's technically brilliant
some of the things they've done,
but they also use constraint as a very much a feature,
not a bug.
And the Western AI economy has been the opposite so far.
I think the best part of this is the fact that
Sam Altman was supposed to be doing open source.
He made it a closed source company.
He stole everybody's data.
He got caught red-handed.
He's being sued by the New York Times for all that.
And now the Chinese have come and open sourced all the stuff he stole.
And he's got a real competitor on the original mission of what OpenAI was supposed to do.
So I have zero sympathy for him or the team over there.
I'm glad that this is all going open source. It should have been open source and it's better for humanity.
And the fact that the Chinese did it to Sam Altman is comeuppance for him stealing everybody
else's content. That's my point.
Okay, so you have it.
But I don't have strong opinions on it. It's hilarious. Does nobody see the irony in this?
He was supposed to be doing open source.
Well, it is interesting because J. Cal, I will say the models are closed. You're right. There
was there's the lawsuit with Scarlett Johansson for stealing her voice even when she said
no. There's a real question. And people have asked New York Times. And then there's now
the question about YouTube data being used to train the video models. So there's a lot
of them being on their heels a little bit. So I definitely see your point. Stealing.
I think all the pressure right now I think is on Meta because I think Meta has to show up with
the next iteration of llama that beats and exceeds Gemini, that exceeds R1. And I think that that is
going to be crucial for us to have a counterweight to
whatever China is going to put out after this. But I mean, Chamath, it's open source. Does it not kind
of- So this is my point. Embrace and extend. Embrace and extend.
Meta has to embrace and extend everything that these guys have shown, meaning like Meta is buying
tens of thousands of NVIDIA GPUs, great. But what did this show?
This shows that actually CUDA,
high level language is in general.
I think we've all known that they suck, okay?
And so we've all been going through it,
thinking that it's like the right thing to do.
DeepSeek throws it out the window.
They use something called PTX.
What Meta does now is critical to understand.
They need to embrace this stuff.
And this is where I think, again, apologies to the Nvidia bulls, but it's going to create a
more heterogeneous environment. And the reason is because there's too much money and risk on the
line to go through a single point of failure. A chip, a high level framework to get to that chip,
that's nuts. So I think that kind of
emperor-has-no-clothes moment is upon us.
Well, let me ask you another question.
Let's assume that we start the world of AI today.
So there's no legacy of the last three years.
And you wake up today and there's this open source model
that's 670 billion parameters.
You can run it on your desktop computer.
It's completely available.
Everything's completely transparent.
And I ask you the question,
forget about all the big companies that are involved
in everyone's strategy historically,
what's the model today to build value here?
Where do you build equity value as a business?
If you're gonna start a company,
if you're gonna invest as an investor, where do you go?
The first is you have to build a shim.
And I think the reason why a shim is really critical is that
there's so much entropy at the model level. What this should show you is you can't pick any model.
And the problem is that the people that manipulate these models, the machine learning engineers and
whatnot, they become too oriented to understanding how to get output of high quality using one thing,
meaning it shouldn't have been the case that we
have engineers that can only use Sonnet. That's the Anthropic model. It shouldn't be the case
that people can only use OpenAI or people can only use Llama. Right now, that is kind of what we have.
You don't have the flexibility to hot swap as models change. So if you're starting a company
today, the first technical problem I would want
to solve for is that. Because tomorrow, if it's R2 or Alibaba's model or Llama, I would want to
be able to rip it out and put it back in and have everything work. And right now we can't do that.
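A minimal sketch of what such a shim could look like in Python; the adapter classes and the `complete` method are hypothetical, just to show the hot-swap shape being described.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The shim: one narrow interface every provider adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...

# Hypothetical adapters; each would wrap a different provider's real SDK/API.
class DeepSeekR1Adapter:
    def complete(self, prompt: str) -> str:
        return "call a hosted DeepSeek R1 endpoint here"

class LlamaAdapter:
    def complete(self, prompt: str) -> str:
        return "call a hosted or local Llama here"

def summarize(model: ChatModel, text: str) -> str:
    # Application code only ever sees the shim, never a vendor SDK,
    # so swapping R1 for R2, Llama, or anything else is a one-line change.
    return model.complete(f"Summarize in one sentence: {text}")

if __name__ == "__main__":
    model: ChatModel = DeepSeekR1Adapter()   # the hot-swap point
    print(summarize(model, "Some long document..."))
```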
The answer to your question is the application layer, because this is all going to become storage. It's like YouTube being built on top of storage or Uber being
built on top of GPS. All these innovations are being commoditized. And this one is happening
faster than all the rest. Do you want to be in the storage business or you want to be
in the YouTube business? Do you want to be in the Uber business or do you want to be
in the GPS chip business? I mean, they're both decent businesses, but Gavin Baker came
on this podcast and said the fastest-depreciating asset in the world was a large language model.
He's been proven right. They're not worth anything. They're all going to be open source.
They're all going to be commoditized. And that's for the best of humanity. And now we're going to
be on the application level, the hardware level with robots. And I think that's where the opportunity
is. Travis, what do you do? What company do you start today? If you start a company today,
given where the world is at, given the open source
models, what do you do? Oh, I'm getting so excited. Look, I think the first degree out is,
is there a wrapper company? Okay, so of course, maybe those companies already exist.
And then is there a tools company?
So in a funny way, even though Facebook could be the wrapper,
they have a tools business that DeepSeek is basically
challenging going full open source
and putting something out there that's really good.
And what has to happen is Meta has to decide,
we are going to embrace and extend this.
We're going to make sure that all the developers come to us,
that all the cool applications get built here.
So I think it's like there's a tools business,
and then there's the wrapper business.
And then look, when AI, here's the one thing on the Nvidia
thing that I would counter with a little bit of what's
been said here, is when AI gets cheap,
you know what's going to happen, guys? There's going to be a lot more AI, right?
I don't think, I think the price elasticity on this one is actually positive. So as the
price goes down, the revenue usage, everything's going to go up. This is the history of tech
forever since like Bill Gates said, I don't know what to do with more than 64 kilobytes
of memory. Yeah.
The question is, did we...
Cheap oil in the United States drove the industrial revolution, right? When we started discovering
oil, suddenly we were able to build factories and make stuff that we never imagined possible.
And so then you're like, okay, AI is like, it's going to get cheap, it's going to be oil,
but it's also going to be specialized for different tasks.
Like you're gonna start getting into nuances
of like what does the investor AI look like?
What does the autonomous car AI look like?
What does the Google search,
I'm trying to figure some shit out, AI look like?
The lawyer, the accountant, the pilot.
So you could go vertical and siloed, siloed air quotes, which you understand what I'm saying.
Absolutely. Yeah. So there's a thing called Jevons Paradox,
which kind of speaks to this concept. Satya actually tweeted about it.
It's an economic concept where, as the cost of a particular use goes down, the aggregate demand
for all consumption of that thing goes up.
So the basic idea is that as the price of AI gets cheaper and cheaper, we're
going to want to use more and more of it. So you might actually get more spending
on it in the aggregate. Because more and more applications will
become cost efficient. Economically feasible. Exactly. That is, I think, a powerful argument for why
companies are going to want to
continue to innovate on frontier models. You guys are taking a very strong point of view that open
source is definitely going to win, that the leading model companies are all going to get
commoditized, and therefore, there'll be no return on capital for continuing to innovate on
the frontier. I'm not sure that's true. For one thing, the R1 model is basically
comparable to O1, which OpenAI released four months ago
and was training on internally, call it nine or 10 months ago.
So OpenAI is on O3 now.
Its frontier is ahead of where R1 is.
Anthropic and Google, I think, have things in the works,
and even Meta, that may be ahead of where R1 is.
So I think R1 or DeepSeek's done a good job
being a fast follower here.
It's not clear that this is the frontier.
And those frontier model companies now,
having seen what might have happened with distillation have a pretty strong incentive to
make sure that doesn't happen again. And they're going to be
taking countermeasures. I mean, there's a question of like, how
much you can do to stop it. But I think it's a little premature
to conclude that there's no reward for being at the
frontier.
Anybody have any other questions for Sacks before we drop him off
to go back to serving the American people?
One final point on the whole open source versus closed source.
Look, I'm not going to take sides in that, but I think that it's a mistake to just view
what happened here as, oh, it's just like plucky upstart that's like doing the community
a huge service out of the goodness of its heart.
You know, it's basically open sourcing all this stuff. Oh, they stole it. They stole it. It's the Chinese. Come on.
You still have this huge geopolitical aspect to it, right? And DeepSeek is a Chinese company,
and they're trying to catch up. And so if you're behind, you're trying to catch up,
then open source is a strategy that actually really makes sense for you.
Absolutely.
And, you know, they're trying to basically undercut the leading American companies.
And I don't think they did it with $6 million. I mean, they have massive resources behind them.
So I think some of the pro-DeepSeek vibes, I think, are a little bit naive,
you know, in Silicon Valley. It's like-
That's only the people who worked for Sam previously and quit who feel that way.
I think there's a lot of like support for DeepSeek in Silicon Valley because again,
people think that they're doing this huge service for the community.
And I think it's a little bit more self-interested than that.
It could be both, right?
I mean, there is a theory that they're trying to undercut and neuter the lead. And at the same time, there's
about a bunch of people who believe in open source and
nobody should control this. And certainly not Sam Altman should
be the person who controls it. So two things could be true at
the same time. David, thank you so much for coming on. We
appreciate it. And thank you for coming on your podcast. Thank
you, David. I know that this is now we're going to talk about a
bunch of other crazy stuff. Yes, a scholar and a gentleman, David.
Yes, thank you. All right. Thanks to David Sacks for coming in. And, you know, I guess,
let's open up the aperture here and talk a little bit about relations with China. We're obviously in
a bit of a Cold War with them. We have tariffs. We have Taiwan. And then we have the sort of trade war going on here with exports
of H100s. Where do we want to start, gentlemen? And, you know, Travis, you've got some deep,
you're one of probably five American entrepreneurs who ran an at scale business with Uber and
the DiDi relationship in China. So you have a unique position of understanding business
in this along with maybe Tim Cook and Elon are the only other two people who've really had an at scale business
there.
Maybe Disney, they have Disneyland there.
What's your take on the relationship and what's going on here?
How's China going to operate differently than the US, Travis, from your experience, your
point of view?
Tell us a little bit about the culture and business ethics in China, particularly as
it relates to AI.
Okay. So, look, I had this thing. I'm going back almost 10 years here, Uber days, we're
running Uber China. And I mean, I cannot, there's no way I could express the frenetic intensity of copying that they would do on
everything that we would roll out in China. And it was so epically intense that I basically had a
massive amount of respect for their ability to copy what we did.
I just couldn't believe it.
We would do real hard work, make it, we'd dial it
and it would be epic and it would be awesome.
We'd roll it out and then like two weeks later, boom.
They've got it.
A week later, boom, they've got it.
And of course I use that to drive our team.
And there's so many great stories. I mean, we had like 400 Chinese nationals in Silicon Valley.
At our offices in San Francisco, we
had a whole floor for the China Growth Team,
and it was primarily Chinese nationals.
We had billboards on the 101 in Silicon Valley in Chinese, Uber billboards to join our
team in Chinese to serve the homeland, right? It was like an all out war. It was really epic.
It was epic. And by the way, when you went to that floor in our office, you were in China.
Like they rolled China style. Like the desks were literally smaller.
Like the density of the space was, it was China. Okay. So, but what happens is when you get really,
really good at copying and that time gets tighter and tighter and tighter and tighter and tighter,
you eventually run out of things to copy. And then it flips.
To creativity.
To creativity and innovation.
Now at the beginning, you know,
it's sort of all over the place.
Like the kind of innovation when it was new was like,
what?
You know, you're like, really?
But as they exercise that muscle,
it gets better and better and better.
So if you want to know about the future of food,
like online food delivery, you don't go to New York City. You go to Shanghai.
What's an example of something really innovative they're doing?
Doesn't Meituan do drone delivery and stuff?
Like here's an example. If you went to offices in, let's say, Shanghai, Beijing, any of the major cities, Hangzhou,
et cetera, the office buildings have hundreds of lockers around their perimeter.
So that everything that you get, whether it be food or anything else, but especially food,
is just the couriers drop them off in these lockers at the office buildings, and then there are a whole
other set of people that are sort of like inter-office- Runners.
Runners that then bring it to your office. As an example, and when you see it, you're like,
and it's epically efficient. And they're taking advantage of their economics on labor and things like this. It wouldn't exactly work that way here, but a lot of the innovation you will see coming
out in Uber Eats or DoorDash, the stuff that's coming out now is stuff that existed three
years ago, four years ago in China, maybe longer.
So eventually you cross that threshold of copying and you are innovating and then you're
leading.
And I think we see that in a whole bunch of different places.
Yeah, here's a look at these smart lockers that you can see.
They're just available for sale when you go online.
But yeah, these things are crazy.
And you've experimented with those as well.
Didn't you have a commissary concept in DTLA?
Well, look, okay, so we got a couple of things. So we have in every one of our facilities,
and we've got hundreds of them, we'll have lockers there. So the courier then waves their phone in
front of a camera, the right locker pops open, they get the food from there and they go.
The courier pickup is asynchronous from production of food. You don't have lines anymore. There's no more
lines, which then speeds up delivery, shortens the amount of time, shortens, reduces how much
money you spend on couriers. And we've got a whole other thing. This doesn't work in,
it probably wouldn't work in China because, well, for a lot of reasons, but let me explain what it is. It's called picnic
where if you are in an office building, you order food, you go to a website, you order whatever it
is from a hundred different restaurants. Those restaurants happen to be in my facilities and
there'll be one courier that goes to one of our facilities and picks up 50 orders at a time,
brings it to an office, puts it, there's a shelf on every
floor, you get notified when your food arrives and it arrives the same time every day, and you just
go to the shelf, get it on your floor and dip it right back into your meeting. Saving people time
at the office, giving them selection on food, especially in food deserts, but even going,
like there's a Sweetgreen right down there from my office right now, I can save 20 minutes by just using our own service
versus doing that. And you get it the same price because the courier economics, the courier
is delivering 50 orders at a time. So courier costs go basically to zero.
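A rough sketch of the batching economics Travis is describing; the per-run courier cost here is a hypothetical placeholder, not a CloudKitchens figure:

```python
# If one courier run carries N orders, the per-order courier cost is (cost per run) / N.
# The $10 per run below is an illustrative assumption, not a real number.
cost_per_run = 10.00  # hypothetical fully loaded cost of one courier run, in dollars

for batch_size in (1, 10, 50):
    per_order = cost_per_run / batch_size
    print(f"batch of {batch_size:>2}: ~${per_order:.2f} courier cost per order")

# batch of  1: ~$10.00 courier cost per order
# batch of 10: ~$1.00 courier cost per order
# batch of 50: ~$0.20 courier cost per order  -> effectively "goes to zero" per order
```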
What do we think of the export controls here? Should we, Chamath, maybe be banning more H100s or other chipsets
going there, or is that futile? I don't know the answer to that. And I think that's,
I think Sacks and President Trump will make a good decision, but here's the curious case of the
export controls. Nick, I sent you a couple of tweets if you want to just bring this up. So the first thing that
people are claiming is that DeepSeek is getting access to a bunch of NVIDIA GPUs using Singapore
as a backdoor. So essentially you create a Singaporean shell company, you place an order
with NVIDIA, NVIDIA fulfills that into Singapore and then the chips go someplace. And so there's a bunch of
examples where people are saying that you're talking about up to a quarter of all Nvidia
revenue goes into Singapore. And the speculation right now is that 100% of those then go into
China, which is an enormous claim because that's a huge amount
of Nvidia's revenue. Now, the interesting thing is if you actually try to understand,
well, maybe that's not true and maybe it's sitting inside of Singapore,
this is where that kind of unravels. So just to be clear, Singapore is about 250 or 260 square
miles. It's like a small, small place.
Also the TikTok headquarters.
I tried to find out how many data centers are in Singapore and it's about 100.
So you would think that, okay, well, what does that mean? 100 could mean anything.
But then you look at the energy, and they publish that.
And all of those hundred data centers consume about 876 megawatts. So these are small data centers, right? And the entire industry is like a one and a half, two billion dollar revenue
business. So I do think that Sacks and the administration are going to have to dig into
this and figure out what their opinion should be. But there is clearly a ton of these chips going into Singapore. I don't think anybody
knows where they end up. And the question is, what does America think about that? And why did we
implement these export controls in the first place? And if there's a simple back door,
how do you want to react? If the US finds a path, I mean, let's talk about what happened with
sanctions in Russia and other prior kind
of sanctioning efforts around the world.
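As a quick aside on the Singapore numbers Chamath just cited, here is a rough sanity check; the NVIDIA revenue, GPU price, per-GPU power draw, and overhead factor are all assumptions for illustration, not reported figures:

```python
# If ~a quarter of NVIDIA's data-center revenue were GPUs actually racked in Singapore,
# would ~876 MW of total data-center capacity be enough to run them?
# Every input below is a rough assumption for illustration only.
nvidia_dc_revenue = 100e9   # assumed annual data-center revenue, USD
singapore_share = 0.25      # the claimed share billed through Singapore
price_per_gpu = 30_000      # assumed average selling price per GPU, USD
watts_per_gpu = 700         # assumed draw per H100-class GPU
overhead = 1.5              # assumed factor for cooling, networking, host servers

gpus = nvidia_dc_revenue * singapore_share / price_per_gpu
power_mw = gpus * watts_per_gpu * overhead / 1e6
print(f"~{gpus:,.0f} GPUs would need ~{power_mw:,.0f} MW")
print("vs. ~876 MW for all ~100 Singapore data centers combined")
```

Under these assumptions, running that many GPUs would use up roughly the entire published Singapore data-center footprint, which is the shape of the argument that the chips billed through Singapore are unlikely to all be running there.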
But as you kind of close the floodgates and close access, the buyer or the receiver of
those goods or that capital are going to look elsewhere.
They're going to look to create a market somewhere else.
And so if we do cut off access to Nvidia chips,
we do cut off access to US exports,
are we not kind of recognizing
that the second order effect of that
is that China will take IP that they've stolen,
copies that they've made to Travis's point,
and develop and build out their own fabs,
and they'll find ways to copy the ASML technology.
And at the end of the day, there's a lot to put together. And I know it's deeply technically complex, but if ever
there were a group of people in the history of human civilization to pull it off, it's probably
the modern Chinese to be able to say, let's go build our own infrastructure.
It's worse than that. This is a great point, but it's worse than that. The models today are capable of designing chips for you that don't
rely on the most complicated technologies that ASML creates. I mean look, one of the luckiest
things that happened to Groq was we designed our chip at 14 nanometer, which is effectively in the
spectrum of technology like VHS and beta. So we chose a simple, simple technology stack to build towards. The latest
cutting edge chips at like 2 nanometer that use these complicated ASML machines, it's not clear
that the yield is actually that good. So why would you spend all that money? And if China is
forced to engineer its way around it, yeah, Freeberg, the answer to your question is they'll
use these models to design chips that
can be manufactured in simple ways, and they'll make simple stuff. So I'm not sure it solves
the problem, is my point. Well, it doesn't. And this is why I think, like, it doesn't solve the real
problem, which is how do we incentivize people in America to really out-engineer and out-innovate the
competition. Or AI ushers in an era of extraordinary abundance and that abundance ultimately
reduces the drive for conflict and things are better off.
Or the other version as well is that China could just bear the cost as a central authority of
building an incredibly great model, right? And they will spend all the money and then they
will tell the Chinese companies, you can distill from
this model for free, because we have a golden vote and a seat on
your board anyways, which is effectively de facto what happens
if you get big enough in China. So there's that possibility as
well, where one central authority bears the capex of
creating something that then everybody else can can draft
off of.
And let's talk a little bit about OpenAI.
They're in Washington asking for money now.
Is that the concept now?
Is it that our government should back them?
The rumor today was they're raising $40 billion
at a $340 billion pre-money with Masa potentially being the lead.
I would love Travis's read on this,
because Travis has taken large money from Masa in the past and has been through this, but
How does he think about and make this decision?
Obviously, we all know, and I mentioned to you guys, the meeting I had with him last summer,
where he basically kicked me out of the room because my company is not generative AI.
Someone said, you should go meet with Masa.
So I'm like, sure, I'll sit down with him, and I start talking and he just, like, looked at me and he's like,
This is not generative AI. I only do generative AI.
I think your company will be very successful.
You will be very successful.
Goodbye.
And he just walked out.
And that was like the end of everything.
That's so great.
Yeah.
Well, that's all he's doing now.
So this is the big bet, right?
So OK.
So I need to bust a myth.
I did not take money from Masa.
So he begged me to take money for years and we did not take it because he is a, he's,
what's the word I'm looking for?
He's a, he's a promiscuous investor.
So once he, once he invests in you, you should probably count on him using your information
and investing in all of your competitors.
At least that's historically what he's done.
So I didn't go there, but then he just kept investing in all my competitors and they kept subsidizing
these markets. And then I'm like, maybe I should have just saturated, soaked up the money that was
there. So one of the things you should think about, like when you look at, like, oh, is OpenAI
taking a lot of money from a masa type situation, is it's a little bit of like a double-edged sword. If
you don't take that money, it goes somewhere else. But if you do take that money, just know that
whatever intelligence they get when they go through the process of giving you the money
and maybe hanging around the board or who knows what is going to be used to do other things.
And that is the nature of the Masa machine. So you're damned if you do, damned if you don't. But you got to pick.
And if the money's going and it's flowing and access to capital is a strategic competitive
weapon or advantage, you must play ball.
Now we were able to, we did stuff with the Saudis before even Vision Fund existed.
They stroked a three and a half billion dollar check when that was like the biggest thing
that ever happened. So we were okay with not having the masa money, but that masa money then
went to all of our competitors. DoorDash.
And so in this open AI context, Travis, I mean, like just knowing what you know about AI,
is this going to be a competitive advantage for Sam to raise 40 billion? Where does it go
when he's up against, we don't know what, in China, Microsoft, Alphabet, and Meta?
Well, look, I think this goes to some of the things that Chamath is saying, which is like,
if constraint is the mother of invention, or whatever the aphorism is, if that's the case, you get into a real weird spot
when you get over- Capitalized.
Over-capitalized. In the Uber model, like the war was subsidizing rides for market share,
essentially being the wrapper for transportation and using the parlance we were using earlier
in this discussion. So it was necessary. You're screwed if you don't. The question is, do you get to this place of
overcapitalized, too big, too bureaucratic, too loose, too weak, too soft? When you have an
open source model that's very smart and it's a thousand flowers blooming,
lots of innovation happening everywhere, could be an overwhelming force. Now, I think there's
going to be different sectors treated different ways where like going full stack in certain
industry sectors is going to matter. And then in other places having like a very sort of chaotic,
everybody does a little slice is going to be OK in other places.
And I think we could probably spend days or hundreds
or dozens of hours just talking about the nuances there.
Well, it seems like there's some degree of relationship
between the Stargate announcement, with Masa and Sam
standing up there with Larry, and then
Satya showing up in the conversation as well.
And this raise and the idea that more hardware,
more infrastructure, faster creates a moat.
And I guess that's the real thing you have to believe,
which becomes harder to believe in the context
of what happened in the last week.
I personally think that these models are,
and I've said this for a while,
it doesn't make sense to have one large do-everything model.
This mixture of experts architecture,
ultimately you can think about taking a large model,
making two copies of it,
and then trying to shrink each model down to whatever is necessary so that you
run each of the two models less frequently,
meaning that that combination of two models
uses less power and takes less time. And then you do the same thing again, and you shrink it down
to four and then 12. And eventually you have lots of smaller models, some of which in some cases are
experts at one thing like doing mathematics or reading or writing. But the reality is we don't
know whether humans have kind of thought about the world the right way; the AI may resolve to having smaller expert models where we don't really understand
why that's the expert on something, but you have a network of very small kind of things that work
together. And that ultimately leads to a commoditization, not just in kind of model cost
and in development and runtime, but also in what's needed. Do you really create much of
an advantage by having all these data centers?
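A toy illustration of the mixture-of-experts routing idea Friedberg is gesturing at, where each token only activates a couple of small experts instead of one giant model. This is a minimal sketch with made-up dimensions, not any lab's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

# Each "expert" here is just a small independent weight matrix.
experts = [rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(n_experts)]
router_w = rng.normal(scale=0.1, size=(d_model, n_experts))

def moe_forward(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router_w                           # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]   # indices of the top-k experts per token
    sel = np.take_along_axis(logits, top, axis=-1)  # their logits
    gate = np.exp(sel - sel.max(axis=-1, keepdims=True))
    gate /= gate.sum(axis=-1, keepdims=True)        # softmax over the selected experts only
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                     # per token, run only the chosen experts
        for slot in range(top_k):
            out[t] += gate[t, slot] * (x[t] @ experts[top[t, slot]])
    return out

tokens = rng.normal(size=(8, d_model))
print(moe_forward(tokens).shape)   # (8, 16) -- only 2 of the 4 experts ran for each token
```

The point of the sketch is just that compute per token scales with the experts actually selected, not with the total parameter count, which is why this style of architecture pushes toward the commoditization Friedberg is describing.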
Do you create much of an advantage?
This is the key point I think, Freeberg,
is that you're not gonna get an advantage
by having more H100s at a certain point.
And the actual advantage is gonna be in the IP
and owning content.
And the really smart thing to do
would be for somebody to go buy Reddit, Quora,
the New York Times, the Washington Post, and Disney
and take all that IP and then not allow
other people to use it, sue the hell out of them every time they try.
Well, I'd take the Washington Post off that list, but yes.
But I'll say it.
The New York Times comes off the list too.
Well, whatever. All those archives are definitely going to be, what would be great about those is
you could then, like a patent troll, then tell anybody else who's absorbed New York
Times stories historically or Disney IP historically, and you could just sue the hell out of them. And then
you've got the best, most proprietary one. But you're describing
text. So you're describing text content, which is a fraction of where this is important.
So video, I think you can recognize that Google's YouTube content library is probably a hundred
to 200 times larger
than the rest of the internet combined.
But they don't have the right to do it.
Well, they do actually.
So, Jason, you're such an old school copyright guy.
You're such an old school media guy, by the way.
Sorry, I believe in artists and their right to content.
We've had a series of conversations
that I feel very confident to tell you
that they do have the right to a good chunk of that content.
Not to a lot of the
copyrighted content that the big media companies have given them, but to a lot of user-generated content
they do have the right, and they are using it, and they're legally doing it.
And then there's a separate kind of body of content, which I think comes, for example, from Tesla. Tesla has an extraordinary advantage
in that they were really pressured to put cameras on everything years ago, and that gives them this ability to build models that do self-driving.
I think that there's a lot more data advantage that arises in certain industry segments than
others, and that's where the moat will lie.
That moat will allow you to actually build better products that get you a more persistent
advantage in gathering more data.
That's ultimately where I think this resolves to.
It may not necessarily be about who's got the biggest data center network.
Yeah. I mean, here's the thing, guys. At some point, the amount of data becomes the long
pole in the tent. At some point, the quality of the algorithms becomes a long pole in the tent,
and more compute is not going to change that. I don't think we're there yet. That's the one
thing that counters the "cheap AI means more AI" idea: is there enough data,
and are the algorithms good enough, to make that additional AI work? I do agree with the siloing and getting
expert and getting better in these ways. I think this is an interesting trade-off between some of
these variables. I got offered 2,500 bucks to put Angel, my book, into, because HarperCollins did a deal
with Microsoft. And so I'm thinking-
2,500 per year?
I think the license is for three years. And they just did this blanket license for
every book. They didn't look at your sales. They didn't look at how desirable it was.
It was just like a blanket deal. Everybody gets 2,500 bucks per book for three years.
And I think I'm going to just do it
just to support proper licensing, so that people can
start going down this path. But let's get into Doge. I think we're
10 days into this administration, and Trump
formally established Doge, the Department of Government
Efficiency in an executive order. Apparently, Elon's been
spending a lot of time at the offices, a bunch of wins. Doge is claiming on the interwebs to be saving
American taxpayers around a billion dollars a day. That's $3 for every American every
day, about $1,000 a year in savings for each US citizen. And they claim they can triple
this. And so for a family of five, that'd be about $15,000 a year,
maybe $60,000 during Trump's second term.
We got $36 trillion in debt.
Have fun with some numbers there if you like.
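Taking up the "have fun with some numbers" invitation, here is the back-of-envelope behind those figures; the only inputs are a rough US population and DOGE's own claimed $1 billion a day, which is not an audited number:

```python
population = 330_000_000          # rough US population
savings_per_day = 1_000_000_000   # DOGE's claimed savings, USD per day

per_person_day = savings_per_day / population
per_person_year = per_person_day * 365
print(f"~${per_person_day:.2f}/person/day, ~${per_person_year:,.0f}/person/year")
# ~$3.03/person/day, ~$1,106/person/year

tripled_family_of_five = per_person_year * 3 * 5
print(f"tripled, family of five: ~${tripled_family_of_five:,.0f}/year, "
      f"~${tripled_family_of_five * 4:,.0f} over a four-year term")
# ~$16,591/year, ~$66,364 over four years -- in the ballpark of the $15k and $60k cited
```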
But the key announcement was very similar to the Twitter
execution, the ability for people to resign,
done in a very kind way.
Eight months of severance-ish is being offered to
federal workers, they expect 5 to 10% of federal workers to take this buyout. And it's I mean,
this could be something like $100 billion in savings, eight months of severance is not actually
a legal concept that you can do. So these are some sort of buyouts.
And there's obviously some hand wringing about it, but I think they're off to a good start.
They've also been canceling leases. As we talked about, you know, pre-election, there is so much
space not being used, that the federal government is terminating a ton of stuff they own and going to sell it
and consolidating folks. And at the same time, all of this is happening. Everybody has to return
to office. Who wants to go first here with the sort of first 10 days of Doge?
I see some eggplant emojis in the group chat. First 10 days of
doge. How exciting.
What's that about? I'm adding you right now.
How are you not in the group chat? Get in the group chat.
I'm adding you right now.
Literally every time one of these hits the group chat, it's just hilarious. Eggplants.
People are like, oh my God, we're not burning taxpayer dollars. And the eggplant always
comes from freeberg first.
I'm outing him as an eggplant-er.
I'm a big Doge eggplant guy.
So much eggplant.
So Freeberg, tell us about how much eggplant you love this.
There's nothing that I would say
is particularly surprising in the first week.
A lot of this was kind of talked about
leading up to the inauguration.
Vivek and Elon published their piece
in the Wall Street Journal a couple of weeks ago.
They talked about the mechanisms of action
that they could utilize to kind of drive reduction in cost.
One of which was come back to the office.
Another one of which is, you know,
giving people a buyout offer.
And by the way, the buyout offer is not new.
Bill Clinton did the same thing during his presidency.
If you guys remember when he tried to balance the budget,
get to a surplus, which he did successfully.
And his intention was to actually reduce US debt to zero
by the year 2013.
And he had a very specific economic and fiscal plan
for doing that, which he put into place.
Incredible era.
I think we're seeing them take the actions that they said they would do.
They said they would demand to employees, federal employees, come back to the office
and they assumed some degree of attrition from that.
And now the buyout offer.
And we'll see how far things go with the courts with respect to their ability to stop legislatively
or statutorily mandated spending.
There's a big question mark here on how much authority the executive branch has
in stopping spending and how much they're not allowed to stop because it's demanded by Congress in acts or laws that have passed.
And so that's going to be the big test here. Over the next couple of months, a lot of lawsuits will fly.
The courts will ultimately adjudicate.
And we'll see how far the Doge intention can take things.
And then there's a separate set of efforts
around legislative action here.
There's about a $2 trillion annual deficit
right now in the United States federal government, $2 trillion
a year.
And if you look at the Dalio book on why countries go broke,
there's a pretty simple kind of
arithmetic in there, which is not complicated. It's just at
the end of the day, the US needs to get our federal deficit down
below 3% of GDP, which means we've got to cut about a
trillion, trillion-one of spending. If we can do that,
then we're in kind of a more economically sound place. By the
way, really important point, which is in the Dalio interview,
as you cut spending, interest rates will come down
because right now there's a pretty significant sell-off
in treasuries and a lot of risk associated
with the US's ability to deliver on its debt obligations
over the next 30 years,
which is why 30-year treasuries are at 5% right now.
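To make the earlier "cut about a trillion, trillion-one" arithmetic concrete, here is the calculation with round, assumed figures for GDP and the current deficit:

```python
gdp = 29e12              # assumed US GDP, USD
deficit = 2e12           # roughly the current annual federal deficit cited above, USD
target_ratio = 0.03      # Dalio's ~3%-of-GDP threshold

target_deficit = gdp * target_ratio
required_cut = deficit - target_deficit
print(f"target deficit: ~${target_deficit/1e12:.2f}T, "
      f"required cut: ~${required_cut/1e12:.2f}T")
# target deficit: ~$0.87T, required cut: ~$1.13T -> "a trillion, trillion-one" of spending
```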
Even though the Federal Reserve is cutting rates,
the rate on treasuries is going up. People are still selling off treasuries. That will stop.
It's also inflationary. It's also inflationary, Dave.
And it's inflationary, that's right.
For sure.
And so as we cut spending, we also will see that there will be less inflation and the US ability
to pay back their debt obligations over the next 30 years goes up, so the rates will come down.
And so there's actually a really nice kind of cyclical effect as these cuts start to
come into play.
The rate at which you can make the cuts actually affects the amount of cuts you have to make.
The faster you make the cuts, the less you have to cut.
And that's a really key kind of principle going into this, which I think we should expect
a big whirlwind of cutting in the next couple of months or an attempt to.
The courts will adjudicate what needs to be legislated, and then they're going to go to
Congress and start to try and get some of these cuts in. But I will tell you
once again, after our visit in DC last week, there was not a single member of Congress that I spoke
with who views cutting to be a mandate for them in the laws that they're trying to pass. They all
have a very different kind of agenda than Doge. Right. Look, this is really one of those interesting
things where it's like the difference between
legislature and executive branch is like Doge is really bringing it to life, is like what powers
and controls does the executive branch have to spend and not to spend, and especially to not spend
when it's been legislated to spend. This is where the action is. There's no law that says,
you can give a bunch of folks eight months of severance and they're gone and you don't
replace them. There's no law that says that. The executive branch, and again, I don't know
all the laws, the rules or laws about how they go about doing it. But let's say, presumably, they're doing this, and there's some legal backing behind it. Like, they just go and do it, and now they're not
spending money. If it was really hard to hire people, and they could even make it harder to
hire people, do they fight bureaucracy with bureaucracy that it's harder to spend,
harder to hire people, harder to procure certain things that you're supposed to spend money on. You can
reduce the spend through a lot of very interesting nuanced rules that they're in control of.
Yes, some friction could slow things down. They're talking about putting competency tests in,
they're talking about giving people reviews, and maybe they have to hit some standards,
and the gentleman's riff. I mean, when you force people to come back to the office,
you're going to lose 5%, 10% of people and 10% take the
buyout and now all of a sudden we're saving things. I mean, it'll be interesting to see
if it's 5 or 10% on RTO. I mean, it could be a lot more. I mean, what I'm hearing about these
buildings is that they are super, super empty, like next level empty. Let's just say I'm really glad I'm not an
owner that has a bunch of leases to the federal government right now.
And you know what the interesting thing about those leases, I was talking to the team at
Density, which does people counting in buildings, so they obviously are very interested in that.
The government is such a reliable client that they're all on one year leases.
So people don't, you know, do what they do with startups, which is force them to do five
or 10 years because they know, hey, this company could go out of business.
They're just like, yeah, yeah, we're just on a rolling year over year lease.
So you can actually just cut these.
It's going to flood the market.
Chamath, your thoughts also on the stopping of payments, because they're obviously going for it.
They stopped all payments, which is a part of the playbook.
I saw at Twitter up close and personal, which is, hey, let's turn off subscriptions
and see, you know, if anybody's using these subscriptions, basically, obviously a judge
got involved in that, but aid going to other countries, you know, we're just starting to
look at what are we actually sending to other countries and for what purpose. And then there's a naming and shaming and maybe appealing
to the public through social media and saying, Hey, do you want this money going here when,
Hey, we have tragedies in our own country that need to be solved. We have healthcare, we have
houses burned down, we have infrastructure. And so maybe you could talk a little bit about hearts
and minds and winning those and what your general take is so far. I think that we have to remember
that we're only nine or 10 days into Doge. So the fact that we're already at a billion dollars a day
is really incredible. And there has really been no discernible impact. There has been a lot of
fissures of fake news and misinformation, but the real impacts have
been negligible to none since they started making those cuts. I think that Doge is a
three-layer onion. So layer one is the people. We have now given a pretty generous offer to folks.
And I think Elon said it, it was like basically the maximum allowed by these contracts but they
tried to do a very good thing there. The second, as you guys just said, the second layer of the
onion is going to be the infrastructure, all the buildings, all the physical plants that the
government owns and operates that may be empty, that may be idle and getting them back into
private hands so that they can be repurposed. That's going to save a ton of
money. But both of them will pale in comparison to the third layer of this onion, which is the IT and
the services and the spend. And what I mean by that is when you read how the department is set up,
at the center and nucleus of every single one of these Doge teams is an engineer.
And I think the reason is that they can get into these systems of record
and start to trace where the money is going.
And I think when you start to uncover through forensic analysis
where these dollars are going and how it's spent,
that's probably how you're going to close the gap from a
trillion to, and I suspect to be honest, it could be more than $2 trillion when it's all said and
done. That is an enormous amount of waste and it's unproductive. So I'm very excited for what happens
over this next little while. Just the transparency is going to be incredible. Guys, just for kicks, check this out, right? If we took 2019 spend, right? The year before COVID
and put it up against 2024 revenues, $500 billion surplus.
Wow. That's crazy.
Versus the $1.5 trillion deficit. So a two trillion dollar swing on like a four-
In four years.
Yeah, on a four trillion dollar budget.
That's all waste.
Well, a lot of it's-
It's all cray cray.
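Chamath's 2019-spend-versus-2024-revenue comparison, spelled out; the outlay and receipt figures are rounded approximations, and the deficit figure is the one cited on the show:

```python
outlays_2019  = 4.4e12   # approximate FY2019 federal outlays, USD
receipts_2024 = 4.9e12   # approximate FY2024 federal receipts, USD
deficit_2024  = 1.5e12   # the deficit figure cited in the conversation, USD

hypothetical_surplus = receipts_2024 - outlays_2019
swing = hypothetical_surplus + deficit_2024
print(f"2024 revenue on 2019 spending: ~${hypothetical_surplus/1e12:.1f}T surplus")
print(f"swing vs. the actual deficit: ~${swing/1e12:.1f}T "
      f"on a ~${receipts_2024/1e12:.1f}T revenue base")
# ~$0.5T surplus; ~$2.0T swing -- the numbers being tossed around here
```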
Remember, we've got a trillion dollars
a year of interest payments now.
I mean, this is guys, this is the thing,
like there's two deflationary things that we need.
One is Doge and two is where AI is going to take us
if it really does its thing. And that will keep us in an okay spot economically. But like we got
to, this spend has to go or we're in, we're in sort of like, we're in Greek, Greek territory,
if that makes sense. Yeah. And I think this is the popular support for this
is pretty incredible.
I'll just go through a couple of numbers with you.
Looking at what people agree with that Trump's doing early
on and what they disagree with.
Obviously, we talked about it last week, Chamath, pardoning
the January 6 protesters and ending requirements
for government employees to report gifts.
That's sort of like the Supreme Court thing.
These are tremendously unpopular.
And then if you go and you look at downsizing the federal government and imposing a hiring
freeze and requiring all federal employees to return to an office, these are incredibly
popular and Elon tweeted these these graphs out as well. So right now you
have Trump at the apex of his political popularity and you have these issues specifically in
a very polarized time as incredibly popular. He's also done an incredible job with the
border. That's another consensus based issue. So Trump now has downsizing the government
and controlling immigration and getting rid of violent immigrants as incredibly popular parts
of his mandate. And that's the big win for him. If you look at his popularity, Trump is massively
more popular than he was the first time around. He's at 49% compared to 44 last time. But he's still
historically the least popular president at this point in a term.
So my point in all of this is when you see Trump doing things like his meme coin or,
you know, taking on Pete Buttigieg today, all that kind of Trump 1.0 negativity grifting,
that's the stuff that's going to derail this.
But the stuff that's not going to derail it is focusing on the Trump 2.0 agenda.
And that is,
as somebody who was a never-Trumper, as you all know in the audience, and now somebody who is
supporting him relentlessly, that margin, that extra 10% of people who support him right now, is
me and other folks who are looking at the people he put around him. He has to stay with the 2.0
agenda, as hard as it is, and stay away from the Steve Bannon agenda and the grifting. Those are the things that will take this all apart. So that's my appeal to them.
I told everybody I'd give a letter grade. I give them a B so far. Could do better, but pretty good. Less of the meme coin, less of that, you know. We have to make sure that we're not dragging
dishwashers and teachers and people who've been here 20 years out of the country. And it's going to be a very deft, important approach
here if this is going to be sustained. And I think it's a
coin toss whether he will be able to maintain his popularity. And
what he did today, like, I don't know if you saw
the Pete Buttigieg thing, he was attacking him over this tragedy.
That's the kind of stuff people don't want. Less of that, please,
more of the Doge. That's my little rant. Can we talk to Travis about Waymo now? Travis, can I ask, have you taken a production
Waymo? Yes. What do you think about it? And do you think that's the future of transportation? And
how does Uber play into the self-driving car business now? I mean, look, it's funny because as you guys know, back in the day, 2015, 16, 17, we had our own
autonomous vehicles out there. And I remember the first one of ours that I took and I got in the
back and all I had was a stop button, a big red stop button that I could push if things got weird.
And I remember this is in Pittsburgh where we had our robotics division and autonomy
division at Uber.
I got out of that car and literally it's like I got off a roller coaster ride.
My legs were, I could not stand straight.
I was like a little wobbly because I was so freaked out and the adrenaline was pumping.
You get in a Waymo today and it's like you're not even thinking
twice. You're just like, it's all good. You just get in, you get out. Now part of it's just the
normalization. It's like it's just working and that normalizing matters in terms of the psychology
around it. We're just there. So it just works. Now, is it an optimized experience for ride sharing? No. The Cybercab is
the ultra destination for what it means to get transported across the city in a vehicle that is
not meant for a human to drive. No steering wheel, folks potentially even facing each other,
just a whole bunch of different formats.
The technology works, we know that. There are different ways to get to the technology.
I think that probably the most interesting thing
that we should be, or one of the most interesting things
to be thinking about, maybe there's a few.
First is cheap AI makes cheap autonomy.
Okay, so as cheap AI gets out there and proliferates and gets broadly
distributed, we should expect autonomy gets easier and easier and easier. And you see
some of the stuff that's happening with Tesla and FSD, their new models are like, I think
in a three month period, they went up like 10X in terms of performance, meaning the number
of miles per human intervention.
That's the thing that Elon's seeing right now because cheap AI, cheap good AI makes cheap good
autonomy. And that's a thing we need to connect the dots on. I think the thing then you go one
level past that, you're like, okay, there's the possibility, literally, that autonomy just gets easy and commoditized,
similar to what's happening to AI.
The next part is, OK, you get the hardware.
You're like, OK, manufacturing's hard.
That's interesting.
That could be a long pole in the tent.
I think that could be a place where Tesla, of course,
has huge advantage.
You then look at who are Waymo's partners.
Are they getting set up to do the right kind
of manufacturing and get
scale of cars out there? But then there's this dark horse that nobody's talking about, which is,
it's called electricity. It's called power. And all these vehicles are electric vehicles.
And if you said, I just did some quick back of the envelope calcs. If all of the miles in California
went EV ride sharing, you would need to double the energy capacity of California. Let's not
even talk about what it would take to double the energy capacity in the grid and things
like that in California. Let's not even go there. Even getting 20% more, 10% more is going to be a gargantuan five to 10 year exercise. Look, I live in LA. It's a nice
area in LA and we have power outages all the freaking time because the grid is effed up and
they're sort of upgrading it as things break. That's literally where we're at. In LA, one of the most affluent neighborhoods in LA. That's just where we
are. I think the dark horse hot take is combustion engine AVs because I don't know how you can go fast getting AV out there really, really,
really massive with the electric grid as it is.
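The kind of back-of-envelope calc Travis is describing looks roughly like this; every input is an assumed round number, and whether the answer reads as "half again" or "double" depends heavily on assumptions about peak charging versus average load and capacity versus energy:

```python
ca_vehicle_miles = 330e9      # assumed annual vehicle-miles traveled in California
kwh_per_mile = 0.30           # assumed EV energy use per mile
ca_generation_twh = 200       # assumed in-state electricity generation, TWh/year

ev_demand_twh = ca_vehicle_miles * kwh_per_mile / 1e9
share = ev_demand_twh / ca_generation_twh
print(f"electrifying every mile: ~{ev_demand_twh:.0f} TWh/year, "
      f"roughly {share:.0%} of assumed in-state generation")
# On these inputs it's ~99 TWh, about half of generation; the peak-capacity hit from
# charging is what pushes the grid build-out toward the "double it" territory Travis describes.
```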
What do you think about regulation in this regard?
Because obviously there was the Cruise incident.
A person got hit by a regular car.
The Cruise vehicle dragged them.
The whole thing imploded.
We had at Uber the tragedy in Arizona where somebody was playing Candy Crush when they
were a safety driver.
You know, what is your outlook when this stuff rolls out and somebody gets hurt? And then,
you know, the tens of thousands of cities that you brought Uber to, how receptive are
they going to be towards
this and what do you think the regulatory framework will be like?
You know, I think similar to how you get normalized, it's like you're used to getting in a car,
it's normalized psychologically and in the sort of public sphere, the public mindset,
you get used to it.
So like, we're getting to a place where these vehicles are probably safer than human driven
vehicles. So yes, there are mistakes, but they're just probably safer and people are just getting
used to it. And that's a big part of the cycle. So I think we're getting out of the hysteria and
we're getting into like, yeah, it's just great. Like talk to people who are using it and they feel safer from a, of course,
like I feel like we're going to get in less accidents, but also I feel safer
because there's like, there's less chance of like an interpersonal problem that
does happen, especially, you know, late at night, you know, when people are out
partying and things like this, there's just like, there is a level of safety on many different aspects to these vehicles.
For the driver.
Yeah.
For the driver.
Yeah, there's like, there's safety aspects across the board.
Sure.
Right?
What do you think about BYD and like you sort of mentioned everybody getting to autonomy
at the same time?
Obviously Waymo's got the biggest lead, Tesla's behind them, BYD and about 10 other providers
are out there doing this.
Do 10 players get there at the same time?
And then it's just who can incorporate these into their network?
And what do you think of the strategy that Uber's doing of, hey, we've got these eight
partners, we'll take everybody into the network, and we'll manage people vomiting in the back
of cars, cleaning them, and charging them?
So look, I think the big issue you have with anything Chinese is will you be allowed to
bring it in the US?
Just period.
Like you maybe kind of can now.
What happens with tariffs?
Will there be blocks and bringing this kind of technology into the US?
What happens there?
I think that's a whole thing.
The bet that Uber makes is that whether consciously or subconsciously, it's like,
will AI, will cheap democratized AI happen? And if so, does that make cheap democratized autonomy?
Then you've got to line up your physical hardware partners, the car manufacturers.
Then you've got to say, okay, is the electricity where it's at and are there other bets to make
to make sure that I can charge my cars? So there is a huge real estate play here and fleet management
play of how do I electrify these plots of land known as parking lots and also set them
up so that robots can clean cars in a very, very efficient way.
There's a whole...
Fleet management, yeah.
Fleet management, yeah.
That's super interesting, Travis.
It's almost like the idea that we all talk about today is data centers, and data centers
need their own power substations in order to meet the power demands.
But if we do see a world of robotics, automation generally, and we've got these kind of moving robotic systems in our world,
they need to have a similar sort of like power demand net that probably looks like, hey,
they all go into their recharge building and they get recharged, whether they're a car
or a humanoid robot or a food delivery robot on the sidewalk or whatever, or a drone.
And they just kind of get recharged, huh?
Robots need actuators.
Do you know what you need for an actuator?
A permanent magnet.
You know what you need for permanent magnet?
Rare earths.
Who's the rare earth king?
Ex-China.
Greenland.
Greenland, let's go.
So guys, I think there's a couple interesting things.
One of them is gonna be,
how are these companies
thinking about real estate, electrifying that real estate in urban environments,
and roboticizing that real estate so that they can do the servicing, maintenance, etc. Look,
I guess it could be manual for a while. But hold on, can I put you on the spot?
Just go one level above it, because if you merge the last two concepts together, we talked about the federal government, Doge, etc. Isn't
there the potential for just a complete surplus of physical inventory that exists in America?
Oh yeah. Big time.
Okay, so what does that mean for commercial real estate? Navigate around that because you
got to evade the falling knives first. Okay, okay, so let's just go down ridesharing lane.
So autonomous ridesharing lane.
You go down that lane, car ownership,
which is already dropping,
drops like a knife all the way down.
And there's this thing in cities,
which takes up 20 to 30% of all the land.
It's called parking.
It's no longer necessary because cars are getting utilized.
The cars that exist on the roads
are getting utilized 15X more than they were before per car. So you need hypothetically
one-fifteenth the number of cars, maybe you could say one-fifth or one-tenth because you want to be able to
surge to like rush hour and things like that. It depends on what kind of carpooling and things
like this are going on. Let's just call it 10x fewer cars, one-tenth the
land necessary for parking. At least one-tenth. Maybe it's less than that. Okay? So now you're
opening up 20% of the land in a city that just goes fallow.
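Travis's utilization arithmetic, written out; the 15x utilization and the 20-30% of urban land devoted to parking are his figures, while the surge buffer is an assumed judgment call:

```python
utilization_gain = 15          # cars used ~15x more per vehicle, per Travis
surge_buffer = 1.5             # assumed extra fleet kept around for rush hour / carpool mix
parking_share_of_land = 0.25   # 20-30% of city land is parking, per Travis

fleet_fraction_needed = surge_buffer / utilization_gain
parking_freed = parking_share_of_land * (1 - fleet_fraction_needed)
print(f"fleet needed: ~{fleet_fraction_needed:.0%} of today's cars, "
      f"freeing ~{parking_freed:.0%} of city land")
# ~10% of today's fleet -- the "call it 10x fewer cars" number -- and roughly a fifth
# of city land freed up under these assumptions.
```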
But what should we do with that? And is there a demand for that land?
Well, look, I mean, maybe it's the –
Should it be housing? And then don't we have to reevaluate all of the city planning today?
Because city planning today to your point works backwards from all these constraints
that are 1.0 constraints. Here's the traffic flow, here are the traffic patterns. Those
don't exist theoretically anymore or they would exist in a totally different way, right?
Yeah. I mean there's like a massive amount of creativity to say,
what can I do with that land with a high ROI? Right? Some people are like,
you're going to have farms, hydroponic farms in urban environments. I'm like,
That's not a bad idea if you want to have farm to table, healthy food. It's literally farm to table.
It's like a mile away from you.
Yeah. So there's some interesting ideas.
The land price has to really come crashing down and there's interesting ramifications if it were to do that.
You could imagine.
That's what I wanted you to say, not to try to get you there.
You're leading the witness.
Well, that seems like the crazy thing that nobody is thinking about, which is, in this push, this physical built inventory has so much value built up, from the 401ks of individuals to the balance
sheets of huge pension funds, but that value could be very different.
Right. But the crazy part is, it could just be that electricity production and electric
capacity on the grid could be the gating factor that makes
it a slow burn, potentially. I'm just riffing here, guys.
Right, right, right.
Yeah, I know. It makes total sense. And if you want to see what happens when you have unlimited
land, if you live in Austin and you see the distance between San Antonio, Houston, and Dallas,
and Austin in that triangle, you get 30 minutes outside of the city centers.
There's just unlimited land and there's less regulation. And you know what's happened?
Housing prices and rents have come down two or three years in a row. So this could happen
in other major cities and if Doge has less regulation, you can build more. It could be
amazing for Americans to actually be able to afford homes again and maybe convert some of this space. Right. You go energy storage, electric grid
upgrades, modular energy capacity upgrades and production. This is going to be very,
very important. Right now, we deal with this all the time. We have, of course, facilities all over
every major city in the US and really around the world. Utility
upgrades is the long pole in the tent in construction development in a lot of our
cities, not all cities, but in a lot of our cities.
The Fed held rates. They're getting close to the goal of 2%. I guess we're at 2.4%, 2.9% in terms of inflation. Any thoughts on
where we're at with the Fed deciding to not cut? And you put it on the docket here, Chamath.
Any wider thoughts there? I would just say that the long end of the yield curve is basically telling
us that there's still a chance for inflation. So I think that the question is these next 30 or 60 days from the administration,
I think are basically, they're critical. And I think if Doge gets to the 3 billion a day
number quicker than people thought, there's going to be a lot of room for, I think, the
president to make a very valid argument that rates are too high for where they are and that we're going to
be able to have a lot more cost control in the expenses, which means that there'll be less need
to spend. It doesn't solve the problem that Yellen created. Yellen and Biden on the way out the door,
the biggest problem was that they put America in this very difficult position because they issued
so much short-term paper
that is extremely expensive. And as all of that rolls off, we have to go and finance a ton of
this debt at now 5%. So it's still- Nearly 30% of the debt is going to get refinanced this year.
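A rough sense of why the refinancing share matters; the $36 trillion debt figure and the roughly 30% rollover share come from the conversation, while the average coupon on the maturing paper is an assumption:

```python
total_debt = 36e12    # total federal debt cited earlier in the episode, USD
refi_share = 0.30     # share rolling over this year, per the discussion
new_rate = 0.05       # roughly where long rates sit
old_rate = 0.02       # assumed average coupon on the maturing short-term paper

rolling = total_debt * refi_share
extra_interest = rolling * (new_rate - old_rate)
print(f"~${rolling/1e12:.1f}T rolls over; refinancing it ~3 points higher "
      f"adds ~${extra_interest/1e9:.0f}B/year of interest")
# ~$10.8T rolls over; ~$324B/year of extra interest under these assumptions
```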
And then it's like, what are these auctions going to look like guys? This is the thing we all got to
believe.
The last auction barely had 2X coverage.
And I think that that could take a lot of the energy out of the market.
Watch the Dalio interview because this is exactly the topic he covers. As we end up
needing to refinance this debt, the rates climb, the appetite isn't there and it becomes a spiral.
That's why we have to cut fast in terms of the deficit to basically attract the market.
Now, the market's moved a little bit, right?
So on January 13th, the 30-year treasury peaked at exactly 5%.
And it's come down today, it's at 4.77.
So a little bit of relief since that peak as kind of the administration's gone into
office and actually taken action.
But as more of this action is realized, if people do appreciate it and Doge is successful,
and the court's adjudication does allow reduction in spending, which I think is the intention,
I think we could see this rate drop from 4.78 much more significantly than where it is. And that'll
create a great deal of relief.
Right. And Dave, it's like, it either does that
or it really, really doesn't.
Or it does the exact opposite, super nasty, really bad.
I got a text from someone who is pretty senior in capital
markets, who thinks this is going to go to five and a half percent
before it goes down.
So they think that there's going to be a little bit more
of a turbulent run ahead.
But the thing is, it's like that whole thing of like, it's going to get to five and a half
before it comes down. It's like it spirals on itself. It's like you got to print money
to then get to that place. And then the printing drives it further, you know, you get to that spiral.
The problem is if we go to five and a half percent, that's not 80 basis points. What
you really need to think about is the total tonnage of actual dollars that need to get repaid. And if you look backwards,
that's effectively like 10% rates from 2000. Could you imagine what the economy would have done if
you had brought rates to 10, 11% 20 years ago? It would have crippled the economy. So we don't have
a lot of room here where you can walk rates up to five and a half, six percent without
a lot of things starting to break. This is why I actually think Doge will be successful because
as people internalize all of these things where every single congressperson, Freeberg, that may
have wanted their own benefit for their community, they'll have to take a step back because the
broader optimization for America just needs to take priority.
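One hedged way to read Chamath's "effectively like 10% rates from 2000" point is through debt-to-GDP: the same nominal rate lands on a much bigger pile of debt relative to the economy. The ratios below are rough assumptions for illustration:

```python
rate_today = 0.055        # the 5.5% scenario being discussed
debt_to_gdp_today = 1.2   # assumed federal debt vs. GDP today (~120%)
debt_to_gdp_2000 = 0.55   # assumed ratio around 2000 (~55%)

# Interest burden as a share of GDP is rate * (debt / GDP), if the whole stack repriced.
burden_today = rate_today * debt_to_gdp_today
equivalent_rate_2000 = burden_today / debt_to_gdp_2000
print(f"interest burden: ~{burden_today:.1%} of GDP; carrying that on 2000's debt load "
      f"would have taken ~{equivalent_rate_2000:.0%} rates")
# ~6.6% of GDP, equivalent to ~12% rates on 2000's debt load under these assumptions
```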
Right. But Chamath, it just doesn't work like that, man. My thing is like, I agree with
the notion, but I just don't believe that any individual congressperson will take responsibility
in this way.
No, they won't. They won't. But the question is, can they block it?
Yeah. Or put another way, again, the executive branch can slow roll spend in a lot of different ways. Except you cannot with Medicare and Social
Security. Discretionary spending is like 20%. The mandatory spending, Social Security, Medicare,
Medicaid, these are the larger outlays. And this is where we come back to the fact that this will never get addressed until
it has to be because of the political suicide that arises. I just think this is where I think Elon's
fame can be helpful. And I mean very specifically this following idea, you know, that famous Sputnik
comment where NASA spent millions of dollars trying to engineer a pen that could write upside down,
and it turned out that in Sputnik the Russians just took a pencil. That is what we need to do
to the US government because I suspect even though there's a lot of mandated spend,
the real question that nobody knows the answer to is, is that spend useful? So even though
it's appropriated by Congress, there has to be a feedback loop that says,
you can just use a pencil. You don't need the upside down writing pen. And I think that if
there's anybody that can broadcast that to the world, it's him. And this is where I think Trump
gets enormous leverage by having Elon in the West Wing that nobody else could give him. The rest
of us would just be chirping into the darkness. Yeah. This is the naming and shaming of government
waste that's actually going to work,
and the Doge account on Twitter is doing it. They're basically saying, hey, we're giving foreign aid
for this project, for that project. Is it going to be perfect every time? No. But you show an empty
office space, you show people not coming to work, you show people wasting money.
The condoms to Gaza?
Well, yeah, if that's even real, there's going to be a bunch of, you know, back and forth here.
But overall, if you keep naming and shaming each of these projects, and then, you know,
they were talking about blockchain, whatever, and supposedly there's a report that Elon is at, like,
the government building, working on leases at the moment.
Like this stuff is going to be extraordinarily popular because you can just take the number
of 330 million Americans, and whatever you just saved, you can just
divide it by that number and tell every American how much they just paid less in taxes or how
much they just saved individually. The naming, shaming, and doing the back of the envelope
math for every American is going to work. Do we want to wrap maybe a little bit on this
tragedy in DC? Okay. What are your thoughts? We were talking with our friend, Sky Dayton,
who is very involved in aviation, and he's got a lot of blog posts he's done recently and he's got a company he invested in
to do pilot training. I'll share two things. One is anonymous. It's from a friend of mine,
who gave it to me and said I could share it. He's a commercial pilot, and I posted this. I'll just
read it. Honestly, DCA is the sketchiest airport we fly into.
I feel like the controllers there play fast and loose, hence the periodic runway incursions.
I've said to every first officer in my threat briefings that we both need to be on red alert
at all times there.
DCA calls out helo traffic, helicopter traffic, and vice versa all the time, but it's borderline
impossible to see
them when you're bombing along at 150 miles per hour. I mean, that's from a pilot that is not,
I don't think he has any incentive to sugarcoat things. And then I just wanted to read a message
from Brian Yutko, who's the CEO of Wisk, who's building a lot of these autonomous systems. He
said, first, auto traffic collision avoidance systems
do exist. Right now, these aircraft will not take control from the pilot to save the aircraft,
even if software and systems on the aircraft know that it's going to collide. That's the
bit flip that needs to happen in aviation. Automation can actually kick in and take over even in piloted
aircraft to prevent a crash. That's the minimum of where we need to go. Some fighter jets have
something called automatic ground collision avoidance systems that do exactly this when
fighter pilots pass out and it's possible for commercial. And then the second he said is we need to have better ATC, air traffic control, software and automation. Right now, we use VHF radio communications for safety
and for critical instructions. And that's kind of insane. We should be using data links, etc.
The whole ATC system runs on 1960s technology. They deserve better software and automation in the control
towers. It's totally ripe for change. The problem is that attempts at reform have failed. So I just
wanted you guys to have that one from this commercial pilot and then two from Brian Yutko
who I think understands this issue really well. There's so much opportunity here to make this
better. This should have never happened. Our other friend, Sky Dayton, has been pushing really hard
for the US government to do advanced pilot training.
One of the things that he says constantly
is just that a lot of the pushback is just union rhetoric
around what they perceive the right thing
for their constituency is.
And hopefully this starts this conversation
because I think guys like Sky, guys like Brian
are working on this next level of autonomous solution that can just make flying totally,
totally safe beyond what it was. The crazy stat is that we haven't had a commercial
airline disaster in the United States in almost 25 years. Isn't that incredible?
I think it was 15, yeah. It's looking like pilot error here. And there also seems to be some
question of why these Apaches
are flying around this really crowded airspace.
And it seems like they're shuttling politicians around.
And maybe that's not the best idea in this really dense area,
as your pilot friend was referring to, Chamath.
So, God, thoughts and prayers and all that stuff
for the families of the people who died.
It's just terrible tragedy.
Terrible tragedy.
Yeah.
It's really just, this is an area to invest money and use the private sector and all this
incredible innovation that's available to upgrade these systems and infrastructure.
This has been another amazing episode of the All In Podcast.
Thanks Travis for joining us.
Thank you TK.
Thanks to the Czar for coming in.
That was a lot of fun guys. First time. This is my first time on a podcast ever. Yes. You all came right in. You were great.
You can come back any time. You were great, man. I appreciate it. Appreciate that. Very
based. It's just going to like it. Tell us what you think. And we'll see you all next time.
Love you, boys. Bye-bye.