All-In with Chamath, Jason, Sacks & Friedberg - AI Doom vs Boom, EA Cult Returns, BBB Upside, US Steel and Golden Votes
Episode Date: May 31, 2025

(0:00) Bestie intros!
(1:25) The AI Doomer Ecosystem: goals, astroturfing, Biden connections, effective altruist rebrand, global AI regulation
(25:17) Doom vs Boom in AI: Job Destruction or Abundance?...
(52:44) Big, Beautiful Bill cleanup and upside: DOGE angle, CBO issues
(1:17:14) US Steel/Nippon Steel deal: national champions and golden votes

Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg

Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

Referenced in the show:
https://nypost.com/2025/05/28/business/ai-could-cause-bloodbath-for-white-collar-jobs-spike-unemployment-to-20-anthropic-ceo
https://polymarket.com/event/us-enacts-ai-safety-bill-in-2025
https://www.aipanic.news/p/the-ai-existential-risk-industrial
https://www.semafor.com/article/05/30/2025/anthropic-emerges-as-an-adversary-to-trumps-big-bill
https://x.com/nypost/status/1760623631283954027
https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai-images-wrong-woke
https://www.thefp.com/p/ex-google-employees-woke-gemini-culture-broken
https://www.campusreform.org/article/biden-admins-new-ai-executive-order-prioritizes-dei/24312
https://x.com/chamath/status/1927847516500009363
https://www.cnbc.com/2025/05/13/microsoft-is-cutting-3percent-of-workers-across-the-software-company.html
https://x.com/DavidSacks/status/1927796514337746989
https://x.com/StephenM/status/1926715409807397204
https://x.com/neilksethi/status/1926981646718206243
https://thehill.com/opinion/finance/5320248-the-bond-market-is-missing-the-real-big-beautiful-story
https://x.com/chamath/status/1928536987558105122
https://x.com/chamath/status/1927373268828266795
https://fred.stlouisfed.org/series/FYFRGDA188S
https://fred.stlouisfed.org/series/FYONGDA188S
https://www.cnbc.com/2025/01/03/biden-blocks-us-steel-takeover-by-japans-nippon-steel-citing-national-security.html
https://truthsocial.com/@realDonaldTrump/posts/114558783827880495
Transcript
All right, everybody. Welcome back to the All In Podcast, the number one podcast in the world.
You got what you wanted, folks.
The original quartet is here live from D.C.
with the great shirt.
Is that is your haberdasher making that shirt or is that a Tom Ford?
That white shirt is so crisp, so perfect.
David Sacks, you're talking about me.
Your czar, your czar.
Oh, I'll tell you exactly what it is.
I'll tell you what it is. You can tell me if it's right.
Brioni.
Yes, of course.
It's Brioni.
It's a Brioni spread collar.
Look at that.
How many years have I spent being rich?
When a man turns 50, the only thing you should wear is Brioni.
The stitching is...
Looks very luxurious.
That's how Chamath knew, right?
Chamath, how did you figure it out?
The stitching?
It's just how it lays with the collar.
To be honest with you, it's the button catch.
The Brioni has a very specific style of button catch, if you don't know what that means.
The All-In Summit is going into its fourth year, September 7th
through 9th.
And the goal is, of course, to have the world's most important conversations. Go to allin.com
slash yada yada yada to join us at the summit. All right. There's a
lot on the docket, but there's kind of a very unique thing
going on in the world, David, everybody knows about AI
doomerism, basically people who are concerned, rightfully so
that AI could have some, you know, significant impacts on the
world. Dario Amodei said he could see unemployment spike to
10 to 20% in the next couple of years,
from the 4% it is now, as we've already talked about here.
He told Axios that AI companies and government need to stop sugarcoating what's coming.
He expects a mass elimination of jobs across tech, finance, legal and consulting.
Okay, that's a debate we've had here.
And entry level workers will be hit the hardest.
He wants lawmakers to take action and more CEOs to
speak out. Polymarket thinks regulatory capture via this AI safety bill is very unlikely.
"US enacts AI safety bill in 2025" currently stands at a 13% chance. But, Sacks, you wanted to
discuss this because it seems like there is more at work than just a couple of technologists.
I think we'd all agree there are legitimate concerns
about job destruction or employment displacement
that could occur with AI.
We all agree on that.
We're seeing robo taxis start to hit the streets.
I don't think anybody believes that being a cab driver
is gonna exist as a job 10 years from now.
So there seems to be something here about AI doomerism,
but it's being taken to a different level
by a group of people maybe with a different agenda, yeah?
Well, first of all, let's just acknowledge
that there are concerns and risks associated with AI.
It is a profound and transformative technology,
and there are legitimate concerns about where AI might lead.
I mean, the future is unknown, and that can be kind of scary.
Now, that being said,
I think that when somebody makes a pronouncement
that says something like 50% of white collar jobs
are gonna be lost within two years,
that's a level of specificity that I think is just unknowable
and is more associated with an attempt to grab headlines.
And to be frank, if you go back and look at Anthropic's announcement or Dario's announcement,
there is a pattern of trying to grab headlines by making the most sensationalist version
of what could be a legitimate concern.
If you go back three years ago, they created this concern that AI models
could be used to create bio weapons. And they showed what was supposedly a sample,
I think, of Claude generating an output that can be used by a bioterrorist or something like that.
And on the basis of that, it actually got a lot of play. And in the UK, Rishi Sunak got very interested in this cause.
And that led to the first AI safety summit at Bletchley Park.
So that sort of concern really drove some of the initial AI
safety concerns.
But it turns out that that particular output
was discredited.
It wasn't true.
I'm not saying that AI couldn't be used or misused to maybe create a bio weapon
one day, but it was not an imminent threat in the way that it was portrayed. There've been other
examples of this. Obviously, people are concerned about could the AI develop into a super intelligence
that grows beyond our control? Could it lead to widespread job loss? I mean, these are legitimate
things to worry about, but I think these concerns are being hyped up to
a level that there's simply no evidence for. And the question
is why. And I think that there is an agenda here that people
should be concerned about.
So let's start with maybe, Friedberg, things that we all
agree on here. There are millions of people who drive trucks
and Ubers and Lyfts and DoorDashes.
You would, I think, agree the majority of that work,
within five to 10 years, just to put a number on it,
will be done by self-driving robots, cars, trucks, et cetera.
Yeah, Dave?
I think that might be the wrong way to look at it, or I wouldn't look at it that way. And maybe I'll just frame it a different way.
Please.
If I'm deploying capital, let's say I'm a CEO of a company,
and I can now have software that's written by AI,
does that mean that I'm going to fire
80% of my software engineers?
Basically, it means one software engineer can output, call it 20, 50 times as much software
as they previously could by using that software generation tool. So the return on the invested
capital, the money I'm spending to pay the salary of that software engineer,
is now much, much higher. I'm getting much more out of that person than I previously could,
because of the productivity unlocked by the AI tool.
So when you have a higher ROI on deployed capital, do you deploy more capital or less capital?
Suddenly, you have this opportunity to make 20 times
on your money versus two times on your money.
If you have a chance to make 20 times on your money,
you're gonna deploy a lot more capital.
And this is the story of technology going back
to the first invention of the first technology
of the caveman.
When we have this ability to create leverage,
humans have a tendency to do more and invest more, not less.
And I think that's what's about to happen.
I think we see this across the spectrum.
People assumed, oh my gosh,
software can now be written with one person.
You can create a whole startup.
You don't need to have venture capital anymore.
In fact, what I think we're gonna see
is much more venture capital flowing into new tech startups,
much more capital being deployed because the
return on the invested capital is so so so much higher because
of AI. So generally speaking, I think that the premise that AI
destroys jobs is wrong because it doesn't take into account the
significantly higher return on invested capital, which means
more capital is going to be deployed, which means actually
far more jobs are going to be created far more work is going
to get done. And so I think that the counterbalancing effect is
really hard to see without taking that zoomed out perspective. To respond to Sacks's point,
I do think anytime you see a major change, socially, societally, there's a vacuum, how's
the system going to operate in the future? And anytime there's a vacuum in the system,
a bunch of people will rush in and say, I know how to fill that vacuum.
I know what to do because I am smarter, more educated, more experienced, more knowledgeable, more moral.
I have some superiority over everyone else, and therefore I should be in a position to define how the new system should operate.
And so there's a natural kind of power vacuum that emerges anytime there's a major transition like this.
And there will be a scrambling and a fighting
and a whole bunch of different representation.
Typically fear is a great way of getting into power
and people are gonna try and create new control systems
because of the transition that's underway.
Okay, Chamath.
You can see us around the world.
Yeah, I mean, so Chamath, it's pretty clear,
you know, Friedberg didn't answer this question specifically,
so I'm gonna give it to you again, you would
agree, jobs like driving things are going to go away. If we had
to pick a number somewhere between five and 10 years, the
majority of those would go away. He's positioning, hey, a lot
more jobs will be created, because there'll be all these
extra venture capital and opportunities, etc. But job
displacement will be very real. And we're seeing, I think, job displacement.
Now you had a tweet recently, you know, you were talking about entry level jobs and how
that seems to be going away in the white collar space. So where do you land on
job displacement? Friedberg's already kind of given the big picture here, but let's step back
for people who are listening, who have relatives who drive Uber or a truck or are graduating from college
and want to go work at a, you know, I don't know,
the Magnificent Seven or in tech and they're not hiring.
And we know the reason they're not hiring
because they're leaning into AI.
So let's talk about the job displacement in the medium term.
I'm going to ignore your question.
And I'm going to answer-
Why should you be any different than the other?
So I...
Now, I'm content on this podcast.
I...
I...
There's two people not wanting to answer the question about job displacement.
Interesting trend.
Hold on.
No, no, no.
We'll go back to that.
Let me start by just saying that it seems that these safety warnings tend to be pretty
coincidental with key fundraising moments in Anthropic's journey.
So, let's just start with that.
And if you put that into an LLM and try to figure out if what I just said was true,
it's interesting, but you find it's relatively accurate. I think that there is
a very smart business strategy here. And I've said a version of this about the other companies at the
foundational model layer that aren't Meta and Google because
Meta and Google frankly sit on these money gushers where they just generate so much capital that they
can fund these things to infinity. But if you're not them, so if you're OpenAI or if you're
Anthropic, you have to find an angle. And I think the angles are slightly different for both,
but I think what this suggests is that there's a pattern
that exists. And I think that that explains some of the framing of what we see in the press, Jason,
and why we get these exaggerated claims. Perfect. So there are people who are doing
this for nefarious reasons, is I guess what you're getting at here. It's a way to pump up the market.
No, it's not nefarious at all. It's smart. It's smart. If you fall for it, it's up to
you.
Yeah. Okay. Well, there's also an industrial complex according to some folks that are backing
this. If you've heard of effective altruism, that was like this movement of a bunch of, I don't know, I guess they consider themselves intellectuals, Sacks.
And they were kind of backing a large swath of organizations
that I guess we would call in the industry, astroturfing, or
what do they call it when you make so many of these
organizations that they're not real in politics and flooding
the zone perhaps. So if you were to look at this article here,
Nick, I think you have the AI existential risk, industrial
complex graphic there, it seems like a group of people,
according to this article, have backed, to the tune of $1.6
billion, a large number of organizations to scare the bejesus
out of everybody and make
YouTube videos, TikToks, and they've made a map of it.
There's some key takeaways here from that article where it says here that it's an inflated
ecosystem.
There's a great deal of redundancy, same names, acronyms, logos with only minor changes, same
extreme talking points, same group of people just with different titles, same funding source. There's a funding source called Open Philanthropy, which was funded
by Dustin Moskovitz, who is one of the Facebook billionaires. Chamath, you worked with him,
right? I mean, he was, wasn't he like Zuck's roommate at Harvard or something? And he's
one of the first engineers who made a lot of money. So he's an EA and he funded this group called Open Philanthropy, which then has become the
feeder for essentially all of these other organizations, which are almost different
fronts to basically the same underlying EA ideology.
And what's interesting is that the guy who set this up for Dustin, Holden Karnofsky, who
is a major effective altruist and was doling out all the money,
he's married to Dario's sister.
She's I guess associated with EA and she was one of the co-founders of Anthropic.
These are not coincidences.
The reality is there's a very specific ideological and political agenda here.
Now, what is that agenda?
It's basically global AI governance, if you will.
They want AI to be highly regulated,
but not just at the level of the nation state,
but let's say internationally, supernationally.
To what end?
Well, if you just do a quick search
on global compute governance,
it'll tell you what the key aspects are.
So number one, they want regulation of
computational resources. This includes access to GPUs. They want AI safety and security regulation.
They want international, you call them globalist agreements. And they want ethical and societal
considerations or policy built into this. Now, what does that sound like? That sounds a lot to me,
like what the Biden administration was pursuing.
Specifically, we had that Biden executive order on AI,
which was 100 pages of burdensome regulation
that was designed to promote AI safety,
but had all these DEI requirements.
So, it led to woke AI.
You remember when Google launched
Black George Washington and so forth.
They had the Biden diffusion rule,
which created this global licensing framework
to sell GPUs all over the world.
So extreme restrictions on proliferation of servers
of computing power.
They created what's called the AI Safety Institute.
And they again, fostered these international AI summits.
So if you actually look at what the Biden administration was tangibly
doing in terms of policy, and you look at what EA's agenda is with respect to
global compute governance, they were pushing hard on these fronts.
And now if you look at the level of personnel, there are very, very powerful
Biden staffers who
now all work in Anthropic.
So probably the most powerful Biden staffer on AI over the past four years was a lawyer
named Tarun Chhabra, and he now works at Anthropic for Dario.
Elizabeth Kelly, who was the founding director of the AI Safety Institute in the
government, now works at Anthropic.
Like I mentioned, Dario's sister is married to Holden Karnofsky, who doles out all the money to these EA organizations.
So if you were to do something like create a network map, you would see very quickly that there's three key nodes here.
There's the effective altruist movement, of which Sam
Bankman-Fried is the most notable member, but of which I think Dustin Moskovitz is now the main funder.
There's the Biden administration and like the key staffers and then you've got
Anthropic and it's a very tightly wound network. Now, why does this matter?
Well, let's get, yeah, also the goals I think is-
Yes. Well, the goal, like like I said is global compute governance.
It's basically establishing national and then international regulations of AI.
But they would claim, let's just pause here for a minute, they would claim the reason they're doing it,
and we'll discuss whether we believe this or not, is that they are concerned about job destruction in the
short term. They're also concerned,
as science fiction as it is, that the AI, when we get to like a sort of generalized superintelligence,
is going to kill humanity, that this is a non-zero chance. Elon has said this before.
They've sort of taken it to a, almost like a certainty. We're going to have so many of
these general intelligences.
Isn't it odd that they only believe that when they're raising money?
Well, that's what I'm sort of getting at.
I think they believe it all the time, but maybe the press releases are timed for the fundraisers.
But yet they're building a really great product.
Yeah, look, I mean...
It is a great product. Claude kicks ass.
I'm more interested in the political dimension of this. I'm not bashing a specific product or company. But look, I think that there is some non-zero risk
of AI growing into a super intelligence that's beyond our control. They have a name for that.
They call it X-Risk or existential risk. I think it's very hard to put a percentage on that.
I'm willing to acknowledge that is a risk. I think about that all the time and I do think
we should be concerned about it. But there's two problems, I think, with this approach.
Number one is X-Risk is not the only kind of risk.
I would say that China winning the AI race is a huge risk.
I don't really want to see a CCP AI running the world.
And if you hobble our own innovation, our own AI efforts in the name of stomping out
every possibility of X-Risk, then you probably end up losing the AI race to China because they're not going to abide by those same regulations.
So again, you can't optimize for solving only one risk while ignoring all the others.
And I would say the risk of China winning the AI race is, you know, it might be like
30%, whereas I think X-risk is probably a much lower percentage.
So there are other risks to worry about, and I do think that they are single-mindedly focused on
scaring people with some of these headlines. First it was the bioweapons, then it was the superintelligence, now
it's the job loss. And I think it's a tried and true
tactic of people
who want to give more power to the government
to scare the population, right?
Because if you can scare the population
and make them fearful,
then they will cry out for the government
to solve the problem.
And that's what I see here
is that you've got this elaborate network
of front organizations,
which are all motivated by this EA ideology.
They're funded by a hardcore leftist.
And by the way, I became aware of Dustin's politics because of the
Chesa Boudin recall.
I found out that he was a big funder of Chesa Boudin.
Remember, that's Dustin Moskovitz and Cari Tuna, his wife.
Also, Reed Hastings just joined the board of Anthropic.
Remember when he, back in 2016, tried to drive
Peter Thiel off of the board of Facebook for supporting Trump. So, you know, these are like
committed leftists, they're Trump haters. But the point is that these are people who fundamentally
believe in empowering government to the maximum extent.
More government, yeah.
More government.
Now, my problem with that is,
I actually think that probably
the single greatest dystopian risk associated with AI
is the risk that government uses it to control all of us.
To me, like you end up in some sort of Orwellian future
where AI is controlled by the government.
And out of all the risks we've talked about,
that's the only one for which I've seen tangible evidence.
So in other words, if you go back to last year
when we had the whole woke AI,
there was plenty of evidence that the people
who were creating these products were infusing
their left-wing or woke values into the product
to the point where it was lying to all of us and it was rewriting history.
And there was plenty of evidence that the Biden EO was trying to enshrine that idea.
It was basically trying to require DEI be infused into AI models.
And it wanted to anoint two or three winners in this AI race.
So I'm quite convinced that prior to Donald Trump
winning the election, we were on a path
of global compute governance where two or three big AI
companies were gonna be anointed as the winners.
And the quid pro quo is that they were gonna infuse
those AI models with woke values.
And there was plenty of evidence for that.
You look at the policies, you look at the models,
this was not a theoretical concern.
This was real.
And I think the only reason why we've moved off of that trajectory is because of Trump's election,
but we could very easily be moved back onto that trajectory. If you were to look at all three
opinions here and put them together, they could all be true at the same time. You've got a number
of people, some might call useful idiots, some might call just people with God complexes who
believe they know
how the world should operate. Effective altruism kind of
falls into that: oh, we can make a formula, that's their
kind of idea, where we can tell you where to put your money,
rich people, in order to create the most good. And, you know,
they are these enlightened individuals with the best view
of the world. They might be, who knows, maybe they're the
smartest kids in the room, but they're kind of delusional. The
second piece I'll do here is, I think you're absolutely correct, Jamath, that there are people
who have economic interests, who are then using these useful
idiots and/or delusional people with God complexes to serve
their need, which is to be one of the three winners. And then
Sacks, layered into all of that, they have a political
ideology. So why not use these people with delusions of grandeur
in order to secure the bag for their companies,
for their investments and secure their candidates
into office so that they can block further people
from getting H100s, because they literally want to.
By the way, that's the part that's very smart
about what they're doing, because, you know,
it's not like they're illiquid. They're full of liquidity
in the sense that you're bringing in people that are very technically capable. You're setting up
these funding rounds where a large portion goes right back out the door via secondaries. There's
all these people that are making money having this worldview. To your point, Jason, it's going to
cement that worldview and then they are going to propagate it even more aggressively into the world.
So I think the threshold question is, should you fear government overregulation or should you fear autocomplete?
And I would say you should not be so afraid of the autocomplete right now.
It may get so good that it's an AGI, but right now it's an exceptionally good autocomplete.
Yeah, and I just think that, again, it's a tried and true tactic of people who want to
give immeasurably more power to the government to try and make people afraid,
and they stampede people into these policies. To them.
It gives them power, exactly. Now, why do I think this is important to talk about?
On last week's show, I talked about the trip to the Middle East and how we started doing
these AI acceleration partnerships with the Gulf states
who have a lot of resources, a lot of money,
and they're intensely interested in AI.
And the Biden administration was pushing them away.
It basically said, you can't have the chips,
you can't build data centers.
And it was pushing them into the arms of China.
The thing that I thought was so bizarre
is that the various groups and organizations
and former Biden staffers who wrote this policy have been agitating in Washington and they've
been trying to portray themselves as China hawks.
And I'm like, wait, this doesn't make any sense because this policy, again, there's
basically two camps in this new Cold War.
It's US versus China.
You can pull the Gulf States into our orbit or you can drive them into China's orbit.
So this to me just didn't make any sense.
And what's happened is that frankly, you've got this EA ideology that's really motivating
things, which is a desire to lock down compute, right?
They're afraid of proliferation.
They're afraid of diffusion.
That's really their motivation.
And they're trying to rebrand themselves as China hawks because they know that in the Trump administration, that idea is just not going to get much purchase.
Right?
And your position as czar is a level playing field.
People compete and the good guys, you know, the West should be supported to hit artificial
general intelligence as fast as possible.
So the bad guys, China, don't get it first.
That's an open competition.
I don't know if I would frame it around AGI specifically,
but what I would say is that, look,
I think our policy should be to win the AI race
because the alternative is that China wins it
and that would be very bad for our economy and our military.
How do you win the AI race?
You gotta out innovate, got to have innovation.
That means we can't have over regulation red tape.
We've got to build out the most AI infrastructure, data
centers, energy, which includes our partners.
And then third, I think it means AI diplomacy,
because we want to build out the biggest ecosystem.
We know that biggest app store wins, biggest ecosystem wins.
And the policies under the Biden administration
were doing the opposite of all those things.
But again, you have to go back to what was driving that. And it was
not driven by this China hawk mentality. That is now a convenient rebranding. It was driven
by this EA ideology, this doomerism. And so this is why I'm talking about it is I want
to expose it because I think a lot of people on the Republican side don't realize where
the ideology is really
coming from and who's funding it. They're obviously Trump haters and they need to be
Loomered, quite frankly. I mean, you know,
Friedberg, I want to come back around again, because I respect your opinion on, you know, how close we are to turning certain corners,
especially in science. So I understand big picture, you
believe that the opportunity will be there, hey, we got
people out of fields, you know, in the agricultural revolution,
we put them into factories, industrial revolution, then we
went to this information revolution. So your position is
we will have a similar transition and it'll be okay.
But do you not believe, because we've talked about this privately and publicly on the pod,
that this speed, the velocity at which these changes are occurring, you would agree, is faster than the industrial revolution,
much faster than the information revolution?
So let's one more time talk about job displacement. And I think the real concern
here for a group of people who are buying into this ideology
is specifically unions job displacement. This is something
the EU cares about. This is something the Biden
administration cares about. If truck drivers lose their jobs,
just like we went to bat previously for coal miners, and
there were only 75,000 or 150,000 in the country
at the time, but it became the national dialogue. Oh my god, the coal miners. How fast is this
going to happen? One more time on drivers specifically? Okay, coders, you think there'll
be more code to write, but driving, there's not going to be more driving to be done. So
is this time different in terms of the velocity of the change and the job displacement in your mind,
Friedberg?
The velocity is greater, but the benefit will be faster. So the
benefit of the Industrial Revolution, which ultimately
drove lower price products and broader availability of
products through manufacturing, was one of the key outputs of
that revolution, meaning that we created a consumer market that
largely didn't exist prior.
Remember, prior to the Industrial Revolution, if you
wanted to buy a table or some clothes, they were handmade,
they were kind of artisanal, suddenly the Industrial
Revolution unlocked the ability to mass produce things in
factories. And that dropped the cost and the availability and
the abundance of things that everyone wanted to have access
to, but they otherwise wouldn't have been able to afford.
So suddenly everyone could go and buy blankets and clothes
and canned food and all of these incredible things
that started to come out of this industrial revolution
that happened at the time.
And I think that folks are underestimating
and under realizing the benefits at this stage
of what's gonna come out of the AI revolution
and how it's ultimately going to benefit people's availability of
products, cost of goods, access to things. So the
counterbalancing force J Cal is deflationary, which is, let's
assume that the cost of everything comes down by half.
That's a huge relief on people's need to work 60 hours a week.
Suddenly you only need to work 30 hours a week,
and you can have the same lifestyle
or perhaps even a better lifestyle than you have today.
So the counter argument to your point,
and I'll talk about the pace of change
and specific jobs in a moment,
but the counter argument to your point
is that there's gonna be this cost reduction
and abundance that doesn't exist today.
Give an example.
Let's give some examples that we could say.
Automation and food prep.
So we're seeing a lot of restaurants
install robotic systems to make food.
And people are like, oh, job loss, job loss.
But let me just give you the counter side.
The counter side is that the cost of your food drops in half.
So suddenly, all the labor costs that's
built into making the stuff you wanna pick up,
everyone's freaking out right now about inflation.
Oh my God, it's $8 for a cup of coffee.
It's $8 for a latte.
This is crazy, crazy, crazy.
What if that dropped down to two bucks?
You're gonna be like, man, this is pretty awesome.
With good service and good experience
and don't make it all dystopian,
but suddenly there's gonna be this like incredible
reduction or deflationary effect in the cost of food.
And we're already starting to see automation play its way into the food system to bring inflation down. And that's
going to be very powerful for people. Shout out to Eatsa, CloudKitchens, and Cafe X. We
all took swings at the bat at that exact concept: that it could be done better, cheaper,
faster. One of the amazing things about these vision-action models that are now being employed
is you can rapidly learn using vision systems and then deploy automation systems
in those sorts of environments where you have a lot of kind
of repetitive tasks that the system can be trained
and installed in a matter of weeks.
And historically that would have been a whole startup
that would have taken years to figure out how to get
all these things together and custom program
and custom code it.
So the flip side is, like when Uber hit,
those people were not drivers.
Think about the jobs that all those people had prior to Uber coming to market. And then the reason they drove
for Uber is they could make more money driving for Uber or DoorDash, plus the flexibility. So their lifestyle got better. They had all of this more control in
their life, their incomes went up. And so there's a series of things that you are correct won't make
sense in the future
from a kind of standard of work perspective,
but the right way to think about it
is opportunity gets created.
New jobs emerge, new industry, new income, costs go down.
And so I keep harping on this that it's really hard today
to be very prescriptive, to Sacks's point,
about what exactly is around the corner,
but it is an almost certainty that what is around the corner
is more capital will be deployed.
That means the economy grows.
That means there's a faster deployment of growth
of new jobs, new opportunities for people to make more money,
to be happier in the work that they do.
And on the flip side being things are gonna get cheaper.
So I mean, we're waxing philosophical here,
but I think it's really key because you can focus
on the one side of the coin and miss the whole other.
And that's what a lot of journalists and commentators
and fear mongers do is they miss that other side.
Got it.
Well said, Freeberg, well said.
I think I've heard Satya turn this question around
about job loss saying, well, do you believe
that GDP is gonna grow by 10% a year?
Because what are we talking about here?
In order to have the kind of disruption
that you're talking about, where, I don't know,
10 to 20% of knowledge workers end up losing their jobs,
AI is gonna have to be such a profound force
that it's gonna have to create GDP growth
like we've never seen before.
That's right.
So it's easier for people to say,
oh, well, 20% of people are gonna lose their jobs,
but wait, we're talking about a world where the economy is growing 10% every year?
Do you actually believe that's going to happen?
That's more income for everyone.
That's new jobs being created.
It's an inevitability.
We've seen this in every revolution.
Prior to the Industrial Revolution, 60% of Americans worked in agriculture.
And when the tractor came around and factories came around, those folks got to get out of doing manual labor
in the fields where they were literally, you know,
tilling the fields by hand.
And they got to go work in a factory
where they didn't have to do manual labor to move things.
Yeah, they did things in the factory with their hands,
but it wasn't about grunt work in the field all day
in the sun.
And it became a better standard of living.
It became a job.
And today we think about-
It became a five day work week.
It went from a six or seven day work week to five.
100 hours a week to 45, 50 hours a week.
And now I think the next phase is we're gonna end up
in less than 30 hours a week with people making more money
and having more abundance for every dollar that they earn
with respect to what they can purchase
and the lives they can live.
That means more time with your family,
more time with your friends,
more time to explore interesting opportunities. So, you know, we've been through this conversation
a number of times, I know. No, it's important to bring it up, I think, and really
unpack it because the fear is peaking now, Sacks. People are using this moment in time
to scare people that, hey, the jobs are going to go away and they won't come back. But what
we're seeing on the ground, Sacks, is I'm seeing many more startups getting created
and able to accomplish more tasks
and hit a higher revenue per employee
than they did in the last two cycles.
So it used to be, you know,
you'd try to get to a quarter million
in revenue per employee, then 500K.
Now we're regularly seeing startups hit a million dollars
in revenue per employee,
something that was rarefied air previously,
which then speaks to your point, Freeberg,
that there'll be more
abundance. There'll be more capital generated, more capital deployed. Because yes, more capital
deployed for more opportunities, but you're going to need to be more resilient, I think.
Yeah. I think it's actually very hard to completely eliminate a human job. The ones that you cited,
and J Cal, you keep citing the same ones because I actually don't think there are that many that fit in this category: the drivers, and maybe level-one customer support,
because those jobs are so monolithic.
But when you think about even like what a salesperson does, right?
It's like, yes, they spend a lot of time with prospects, but they also spend time negotiating
contracts and they spend time doing post-sale implementation and follow-up and they spend
time learning the product and giving feedback. I mean, it's a
multifaceted job, and you can use AI to automate pieces of it, but to eliminate
the whole job is actually very hard. And so I just think this idea that, boom, 20%
of the workforce is going to be unemployed in two years, I just don't
think that it's gonna work that way. But look, if there is widespread job disruption, then obviously the government's going to have to
react and we're going to be in a very different societal order. But my point is you don't want the
government to start reacting now, before this actually happens.
Yeah, we don't need to be precogs and predict it. Yeah.
It's a total power grab. It's a total power grab to give the government and these organizations
more power before the
risk has even manifested.
And let me say this as well, with respect to all these regulations that were created,
the 100-page Biden EO and the 200-page diffusion rule, none of these regulations solve the
x-risk problem.
None of these things actually would prevent the most existential risks that we're talking
about.
I'm happy-
They don't solve for alignment.
They don't solve for the kill switch. None of that.
Yeah. When someone actually figures out how to solve that problem, I'm all ears. Look,
I'm not cavalier about these risks. I understand that they exist, but I'm not in favor of the
fear-mongering. I'm not in favor of giving all this power to the government before we even know
how to solve these problems. Chamath, you did a tweet about entry level jobs being toast.
So I think there is a nuance here
and both parties could be correct.
I think the job destruction is happening as we speak.
I'll just give one example and then drop to you Chamath.
One job in startups, one that's not driving a car
or super entry-level, was that people would hire consultants
to do recruitment and to write job descriptions.
Now, I was at a dinner last night talking to a bunch of founders here in Singapore and I said, how
many people have used AI to write a job description? Everybody's hand went up. I said, how many
of you with that job description, was that job description better than you or any consultant could have
written? And they all said, yes, 100%, AI is better at that job. That was
a job, a high-level HR recruitment job, or an aspect of it, Sacks. So that was half the job, a third of the job, to your point.
The chores are being automated.
So I do think we're going to see entry level jobs, Chamath.
The ones that get people into an organization,
maybe they're going away.
And was that your point of your tweet,
which we'll pull up right here?
If a GPT is a glorified autocomplete,
how did we used to do glorified autocomplete in
the past? It was with new grads. New grads were our autocomplete. And to your point,
the models are good enough that it effectively allows a person to rise in their career without
the need of new grad grist for the mill, so to speak. So I think the
reason why companies aren't hiring nearly as many new grads is that the folks that are already in
a company can do more work with these tools. And I think that that's a very good thing. So you're
generally going to see OPEX as a percentage of revenue shrink naturally, and you're going to generally see revenue
per employee go up naturally.
But it's going to create a tough job market for new grads in the established organizations.
And so what should new grads do?
They should probably steep themselves in the tools and go to younger companies or start
a company.
I think that's the only solution for them.
Bingo.
The most important thing for whether there are jobs
available for new grads or not is whether the economy
is booming.
So obviously in the wake of a financial crisis,
the jobs dry up because everyone's cost cutting
and those jobs are the first ones to get cut.
But if the economy is booming,
then there's gonna be a lot more job creation.
And so again, if AI is this driver and enabler of tremendous productivity, that's going to
be good for economic growth.
And I think that that will lead to more company formation, more company expansion at the same
time that you're getting more productivity.
Now to give an example, one of the things I see a lot discussed online about these coding assistants is that they
make junior programmers much better because you know if you're already like a 10x programmer,
very experienced, you already knew how to do everything and you could argue that the people
who benefit the most are the entry-level coders who are willing to now embrace the new technology
and it makes them much more productive.
So in other words, it's a huge leveler
and it takes an entry level coder
and makes them 5X or 10X better.
So look, this is an argument I see online.
The point is just, I don't think we know how this cuts yet.
I agree.
And I just think this doomerism is premature, and it's not a coincidence
that it's being funded and motivated by this hardcore
ideological element.
I'll tell you my hiring experience.
We have about 30 people at 8090, and the way that I have found it to work the best is you
have senior people act as mentors and then you have an overwhelming corpus of young,
very talented people who are AI native.
And if you don't find that mix, what you have instead are L7s from Google and Amazon and Meta
who come to you with extremely high salary demands and stock demands,
and they just don't thrive.
And part of why they don't thrive is that they push back on the tools and how you use them. They push back on all these things that the tools help you get to
faster. This is why I think it's so important for young folks to just jump in with two feet
and be AI native from the jump because you're much more hireable, frankly, to the emergent
company. And at the bigger companies, you'll have a lot of these folks that see the writing
on the wall but
may not want to adapt as fast as they otherwise would. Another way, for example, that you can measure this is if you look inside your company
on the productivity lift of some of these coding assistants for people as a distribution of age,
what you'll see is the younger people leverage it way more and have way more productivity than older folks.
And I'm not saying that as an ageist comment. I'm saying that it's an actual reflection of how people are reacting to these
tools. What you're describing is a paradigm shift. It is a big leap. It's like when I went to college,
when I took computer science, it was object-oriented programming. It was like C++. It
was compiled languages. It was gnarly. It was nasty work. And then you had these high level abstracted languages.
And I remember at Facebook, I would just get so annoyed because I was
like, why is everybody using PHP and Python?
This is like not even real.
But I was one of these old Luddites who didn't understand that I
just had to take the leap.
And what it did was it grew the top of the funnel of the number of developers
by 10x, and as a result, what you had were all of these
advancements for the internet.
And I think what's happening right now is akin
to the same thing, where you're gonna grow the number
of developers upstream by 10x, but in order to embrace that,
you just have to jump in with two feet.
And if you're very rigid in how you think a job should be
done technically, I think you're just gonna get left behind.
Just a little interesting statistic there.
Microsoft announced 6,000 job layoffs, about 3% of their workforce, while putting up record
profits and being in an incredible cash position.
That would be something-
I mean, it's like total confirmation bias.
It's like now every time there's a layoff announcement, people try to tie it to AI to
feed this doomer story.
I don't think that's an AI story.
Well, I actually think it-
I don't think it's an AI story.
I think it's just-
I think it is because the people they're eliminating
are management and I think the management layer
becomes less necessary in the world.
It was entry level employees,
now you're saying it's management.
This is total confirmation bias.
No, no, I think those are two areas
that specifically get eliminated.
Entry level, it's too hard to give them the grunt work
and then for the managers who are old
and have been there for 20 years.
Hold on, let me finish.
For those people, I think they are unnecessary
in this new AI monitoring world.
AI can't do management.
What are you talking about?
What is the AI agent that's doing management right now
in companies?
This theory doesn't even make sense.
Oh no, it totally does.
There are tools now that are telling you,
these are the most productive people in the
organization.
Chamath just outlined who's shipping the most, et cetera, who's using the tools.
And then people are saying, well, why do we have all these highly priced people who are
not actually shipping code, who are L7s, et cetera?
You're totally falling for some sort of narrative here.
This makes no sense.
I don't think I am.
Yeah.
Let me be very clear what I'm saying.
What I am saying is AI natives are extremely productive.
They use these tools, they're very facile with them.
I think it's very reductive, but what I see is that
the older or more established in your career you are
in technical roles, the harder
and harder it is for folks like that to embrace these tools
in the same way.
Now, how does it play out in terms of jobs?
I think that just these tools are good enough
where the net new incremental task-oriented role
that would typically go to a new grad,
a lot of that can be defrayed by these models.
That's what I'm saying very clear, specifically.
And I don't think that speaks to management.
I agree with Sacks.
It has nothing to do with management.
But Sergey said, Freeberg, when he came to our F1, that management would be
the first thing to go. I was talking to some entrepreneurs last night, again, here in Singapore,
and they are taking all the GitHub and Jira cards and things that have been submitted,
plus all the Slack messages in their organization, and they're putting them into an LLM and having
it write management reports of who is the most productive
in the organization, and in the new version of Windows,
it's monitoring your entire desktop, Freeberg. Management
is going to know who in the organization is actually doing work,
what work they're doing, and what the result
of that work is through AI. That is the future
of management, and you take out all bias, all,
you know, loyalty, and the AI is going to do that.
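The pipeline described here, Jira cards plus Slack messages fed to an LLM to generate a productivity report, could be sketched roughly like this. A minimal illustration: the record shapes, field names, and prompt wording are all assumptions, and a real version would pull from the Jira and Slack APIs and send the prompt to whatever model the team uses.

```python
# Sketch of an activity-log-to-LLM-prompt pipeline. The ticket/message record
# shapes and the prompt text are hypothetical, invented for illustration.

def build_activity_log(tickets, messages):
    """Merge ticket and chat activity into per-person event counts."""
    log = {}
    for t in tickets:
        entry = log.setdefault(t["assignee"], {"tickets_closed": 0, "messages": 0})
        if t["status"] == "done":
            entry["tickets_closed"] += 1
    for m in messages:
        entry = log.setdefault(m["author"], {"tickets_closed": 0, "messages": 0})
        entry["messages"] += 1
    return log

def build_report_prompt(log):
    """Turn the activity log into a prompt asking a model for a management report."""
    lines = [f"- {name}: {v['tickets_closed']} tickets closed, {v['messages']} messages"
             for name, v in sorted(log.items())]
    return ("You are an engineering manager's assistant. Given this activity log,\n"
            "summarize who is shipping the most and who may be underrated:\n"
            + "\n".join(lines))

# Toy data standing in for real Jira/Slack exports.
tickets = [{"assignee": "ana", "status": "done"}, {"assignee": "bo", "status": "open"}]
messages = [{"author": "bo"}, {"author": "ana"}, {"author": "ana"}]
prompt = build_report_prompt(build_activity_log(tickets, messages))
```

The interesting design question is the aggregation step: the model never sees raw messages, only per-person counts, which is what keeps the prompt small enough to fit a whole organization's activity into one context window.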
I couldn't disagree with you more, Sacks, on that.
But, Freeberg, do you want to wrap this up here on this topic?
My point is that managers are not losing their job
because AI is replacing them.
I didn't say that AI wouldn't be a valuable tool
for managers to use.
Sure, AI will be a great tool for managers,
but we're
not anywhere near the point where managerial jobs are being eliminated because they're
getting replaced by AI agents. We're still at the chatbot stage of this.
Literally, Sergey said he took their internal Slack, went into a dev conversation, said,
who are the underrated people in this organization who deserve a raise, and it gave him the right
answer. Great. That doesn't allow you to cut 6,000 people.
I think it's happening as we speak.
It's just not over.
You fell for this narrative.
You grasped onto this Microsoft restructuring
where they eliminated 6,000 roles
and you're trying to attribute that to AI now.
I think it has to do with AI.
I think management is looking at it saying,
we are going to replace these positions with AI.
We might as well get rid of them now.
It is in flux. We'll see who's right in the coming months.
Can I make another comment?
Freeberg, wrap this up here so we can get on to the next topic. This is a great topic.
I want to make one last point, which I think, and Sacks, you may not appreciate this, so
we can have a healthy argument about this. I think in the same way that all of these
jobs are going to get lost to AI fear mongering, there's a similar
narrative, which I think is a false narrative, that there's a
race in AI underway between nation states. And the
reason I think it's false is if I asked you guys the question
who won the Industrial Revolution? The Industrial
Revolution benefited everyone around the world. There are
factories, and there's a continuous effort and continuous improvement in manufacturing
processes worldwide.
That is a continuation of that revolution.
Similarly, if I asked who won the internet race, there were businesses built out of the US,
businesses built out of China, businesses built out of India and Europe that have all
created value for shareholders, created value for consumers, changed the world, et cetera.
And I think the same is going to happen in AI. I don't think that there's a finish line in AI. I think AI is a new paradigm of work, a new paradigm of productivity, a new paradigm of business, of the economy, of livelihoods, of pretty much everything. Every interaction humans have with each other and the world around us will have AI in its substrate.
And as a result,
I think it's gonna be this continuous process of improvement.
So I'm not sure, look, there are different models
and you can look at the performance metrics of models,
but you can get yourself spun up into a tizzy
over which model is ahead of the others,
which one's gonna quote, get to the finish line first.
But I think at the end of the day,
the abundance and the economic prosperity that will arise
from the continuous performance improvements that come out of AI and AI development will benefit all nation states, and actually could lead to a little bit
less of a resource-constrained world, where we're all fighting over limited resources and there's nation-state
definitions around who has access to what, and perhaps more abundance, which means more peace and
less of this kind of resource-driven world.
So your thoughts on the kumbaya theory, which was by Freeberg.
Yeah, exactly.
I'll partially agree in the sense that I don't think the AI race is a finite game.
It's an infinite game.
I agree that there's no finish line, but that doesn't mean there's not a race going on.
For example, an arms race would be a classic example of a competition between countries
to see who is stronger to basically amass power.
They might be neutralizing each other.
The balance of power may stay in equilibrium even though both sides feel the need to constantly
up-level their arms, their power. Yeah.
And so I think that, to use the term that Mearsheimer used
at the All In Summit, we are in an iron cage.
The US and China are the two leading countries in the world,
economically, militarily, technologically.
They both care about their survival.
The best way to ensure your survival in a self-help world
is by being the most powerful.
And so these are great powers who care a lot
about the balance of power.
And they will compete vigorously with each other
to maintain the greatest balance of power between them.
And high tech is a major dimension of that competition.
And within high tech, AI is the most important field.
So look, there is gonna be an intense competition around AI. Now, the question is, how does that end up?
I mean, it could end up in a tie, or it could end up in a situation where both countries benefit,
maybe open source wins, maybe neither side gains a decisive advantage, but they're absolutely
going to compete because neither one can afford to take the risk
that the other one will develop a decisive advantage. Prisoner's dilemma. Nuclear proliferation
is a good analogy. I would argue nuclear deterrence led to a more peaceful world in the 20th century.
I mean, is that fair to say, Sacks, that ultimately... Well, what happened with nuclear is that the
actual underlying technology hit an asymptote. It plateaued, right?
And so we ended up in a situation where,
in the case of the United States versus Soviet Union,
where both sides had enough nukes
to blow up the world many times over,
and there wasn't really that much more to innovate.
So the underlying technological competition had ended,
the dynamic was more stable,
and they were able to reach an arms control framework to sort of control the arms race, right? I think AI is a little
different. We're in a situation right now where the technology is changing very
very rapidly and it's potentially on some sort of exponential curve and so
therefore being a year ahead even six months ahead could result in a major
advantage. I think under those conditions both sides are gonna feel the need to compete
very vigorously.
I don't think they can sign up.
But this is a system of productivity, right?
For an agreement to slow each other down.
I just don't think-
But nuclear was not a system of productivity.
It was not a system of economic growth.
It was a system of literally destruction.
And this is quite different.
This is a system of making more with less,
which unleashes benefits to everyone in a way that perhaps
should be calming down the conflict.
You've got to admit that there is a potential dual use here.
There's no question that the armies of the future are going to be drones and robots and they're going to be AI powered.
Yeah.
And as long as that's the case, these countries are going to compete vigorously to have the best AI, and they're going to want their
leaders, their national champions, their startups and so forth
to win the race.
What's the worst case, Sacks, if China wins the AI race?
What is the worst case scenario?
Ask what it means first.
Ask Sacks what it means.
That's literally what I'm asking.
Like, what would that scenario be?
Would they invade America and they dominate us forever?
And what does it mean to lead?
Yeah, what does it mean to win?
To me, it would mean that they achieve a decisive advantage
in AI such that we can't leapfrog them back.
And an example of this might be something like 5G
where Huawei somehow leapfrogged us,
got to 5G first and disseminated it through the world.
They weren't concerned about diffusion. They were interested in promulgating their technology throughout the world.
So if the Chinese win AI, they will sell more products and services around the globe than
the US.
This is where we have to change our mindset towards diffusion. I would define winning
as the whole world consolidates around the American tech stack. They use American hardware
in data centers that, again, are
formerly powered by American technology. And, you know, just look at market share, okay? If we have
like 80 to 90 percent market share, that's winning. If they have 80 percent market share, then we're
in big trouble. So it's very simple. It means like... Yeah, but if the market grows by 10x,
it doesn't matter, because every individual in every country will now have more; they will have a more
prosperous life. And as a result, it's not necessarily the framing of,
if we don't get there first, we are necessarily going to lose.
I get that there's an edge case of conflict or what have you,
but I do think that there's a net benefit where the whole world suddenly is in
this more prosperous state. And you know,
this is a classic example of a dual use technology
where there are both economic benefits
and there are military benefits.
Yes. GPS would come to mind in this example, right?
Like my summary point is just that it's not all about
a losing game with respect to this quote race
with other nation states.
But at the end of the day, yes, there is risk.
But I do think that if the
pace of improvement stays on track, like it is right now, holy shit, I think we're in a pretty
good place. That's just my point. Okay, some positivity. Okay. Look, I hope that the AI race
stays entirely positive and it's a healthy competition between nations and the competition
spurs them on to develop more prosperity for their citizens. But as we talked about
the AI summit, there's two ways of looking at the world. There's kind of the economist way that Jeffrey Sachs was talking about, and then there's the balance of power way, a realist way,
which Mearsheimer was talking about. And when economic prosperity and survival or balance of
power come into conflict, it's the realist view of the world
that the balance of power gets privileged.
And I just think that's the way that governments operate
is that prosperity is incredibly important.
We want economic success,
but power is ultimately privileged over that.
And this is why we're gonna compete vigorously in high tech.
That's why there is gonna be an AI race.
Okay, perfect segue. We should talk a little bit about what was the topic of discussion.
Yesterday, I had a lunch with a bunch of family offices and capital allocators,
government folks here in Singapore, and they were talking about our discussion last week
about the big, beautiful bill and the debt here in the United States. It's permeating everywhere. The two conversations
at every stop I've made here is the big, beautiful bill and the balance sheet of the United States,
as well as tariffs. So we need to maybe revisit our discussion last week. Chamath, you
and Freeberg did an impromptu call with Ron Johnson over the weekend, which then spurred
him to go on 20 other podcasts
to talk about this.
Stephen Miller from the administration
has been tweeting some corrections
or his perceived corrections about the bill.
And Sacks, I think you've also started tweeting about this.
Where do we want to start?
Maybe, Chamath, you-
Well, I think there are just a couple of facts
that should be cleaned up because-
Okay, so facts from the administration, their view of our discussion.
Well, even though I was defending the bill last week on the whole, I wasn't saying it
was perfect.
I was just saying it was better than the status quo.
Yeah, you were clear about that.
Yeah.
But even I, in doing that, was conceding some points that I think were just factually wrong.
And the big one was that I said I was disappointed that
the DOGE cuts weren't included in the big, beautiful bill.
What Stephen Miller's pointed out is that reconciliation bills
can only deal with what's called mandatory spending.
They can't deal with what's called discretionary spending.
And since the DOGE cuts apply to discretionary spending,
they just can't be dealt with in a reconciliation bill.
They have to be dealt with separately. There can be a separate rescission bill that comes up,
but it can't be dealt with in this bill. And just to be very clear, look, if the DOGE cuts
don't happen through rescission, I'm going to be very disappointed in that. I really want the DOGE
cuts to happen, but it's just a fact that the DOGE cuts cannot happen in the Big Beautiful Bill.
It's not that kind of bill. And I think it's therefore wrong to blame the Big Beautiful Bill
for not containing DOGE cuts
when the Senate rules don't allow that.
It all goes back to the Byrd rule.
There are only specific things
that can be dealt with through reconciliation,
which is this 50 vote threshold.
And it has to be quote unquote mandatory spending.
Discretionary cuts are dealt
with in annual appropriations bills that require 60 votes. Now look, this is kind of a crazy system.
I don't know exactly how it evolved. I guess Robert Byrd is the one who came out with all this stuff
and maybe they need to change the system, but it's just wrong to blame the big beautiful bill for
not containing the DOGE cuts. That's just a fact. So the other thing is that the BBB does actually cut spending. It's
just not scored that way because when the bill removes the sunset provision from the 2017 tax
cuts, the CBO ends up scoring that as effectively a spending increase. But tax rates are simply
continuing at their current level. In other words, at this year's level. If you used the current year as your baseline and then compared it to spending next year,
it would score as a cut in spending. It's not correct to say the bill increases spending.
It does actually result in a mandatory spending cut, but it's not getting credit for that
because we're continuing the tax rates
at the current year's rates.
Do you believe, Sax, that this administration,
which you are part of, in four years
will have balanced the budget?
Will it have reduced the deficit,
or will the deficit continue to grow at two trillion a year?
What is your belief?
Because there's a lot of strategies going on here. Yeah. My belief is that President Trump came into office inheriting a terrible fiscal situation.
I mean, basically-
That he created and that Biden created. They both put a trillion on the debt.
That's just a fact.
It's a big difference. It's a big difference to add to the deficit when you're in the emergency
phase of COVID. And there's emergency spending. It's emergency spending. It was never supposed to be
permanent. And then somehow Biden made it permanent. And he wanted a lot more. Remember,
build back better. He wanted a lot more. So, you know, it's tough when you come into office with
what is a $2 trillion annual deficit. So to my original question. Now look, hold on. Would I
like to see the deficit eliminated in one year?
Yeah, absolutely.
But there's just not the votes for that.
Well, I asked you for four years.
There's a one vote margin here in the House, and the Democrats aren't cooperating in any
way.
So I think that the administration is getting the most done that it can.
This is a mandatory spending cut.
And I think the DOGE cuts will be dealt with hopefully through rescission
in a subsequent bill. I'm asking you about four years from now, will we be sitting here in four years? Will Trump have cut spending by the end of this term in another three and a half years?
Will we be looking at a balanced budget? Potentially? Is that the goal of the
administration? Or will we be at $42, $44, $45 trillion at the end of Trump's second term, David Sacks?
Listen, if you want that level of specificity,
you're going to have to get Scott Bessent on, okay?
This is just not my area.
I'm not going to pretend to have that level of detailed answers.
But what I believe is that
the Trump administration's policy is to spur growth.
I think that these tax policies will spur growth.
I think that AI will also be a huge tailwind. It'll be a
productivity boost. I think let's stop being doomers about it. We need that productivity boost.
And I think that the net result of those things will be to improve the fiscal situation.
Do I want more spending cuts? Yeah, but look, we're getting more than was represented last week.
Let's put it that way. Okay. Fair enough, Sax. Thank you for the cleanup there. Chamath, our
bestie, Elon, was on the Sunday shows and he said, hey, the bill can be big or it can be beautiful.
It can't be both. He seems to be, I'll say, displeased or maybe not as optimistic about
balancing the budget and getting spending under control. But he still believes in Doge,
obviously, and hopefully Doge continues. You seemed a little bit concerned last week.
A week's passed.
You've heard some of Stephen Miller's opinions.
Where do you net out seven days from our big, beautiful budget bill debate last week?
A week later?
Well, I mean, I think Stephen's critique of how the media summarized the reaction to the bill is accurate.
And I think it's probably useful to double-click into one thing that Sacks didn't mention but that
Stephen did. A lot of this pivots around the CBO, which is the Congressional Budget Office,
and how they look at these bills. And there's a lot of issues with how they do it. In one specific case,
which Sacks just mentioned and Stephen talked about is that they have these arcane rules about the way
that they score things. And what they were assuming is that the tax rates would flip back to what they were
before the first Trump tax cuts,
which obviously would be higher than where they are today.
What that would mean in their financial model
is we were gonna get all that money.
Now to maintain the tax cuts where we are,
they now then would look at that and say,
oh, hold on, that's a loss of revenue.
Why are all of these things important?
I downloaded the CBO model, went through it,
and what I would say is at best it's Spartan.
Which means that I don't think a financial analyst
or somebody that controls a lot of money will actually put a lot of stock in their model. I think
what you'll have happen is people will build their own
versions, bottoms up.
Do you trust it, the CBO's version of this? Or do you largely trust it?
I don't think the CBO really knows what's going on, to be
totally honest with you.
Okay.
I think that there are parts of what they do, which they're also opaque on. Nick, I sent you
a tweet from Goldman Sachs. So here's what Goldman put out. Now, the point is when you build a model,
what you're trying to do is net out all of these bars. Okay, you're trying to add the positive bars
and the negative bars and you figure out what is the total number at the end of it. Now, in order
to do that, when
you see the bars on the far right, that's a 2034 dollar. That's very different than a 2025 dollar.
The CBO doesn't disclose how they deal with that. They don't disclose the discount rate. So you can
question what that is. The CBO makes these assumptions that, you know, as Stephen pointed
out, are very brittle with respect to the tax plan. That's not factored in
here. So those are the issues with the way the CBO scores it.
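The discounting point can be made concrete with a minimal sketch. The 3% discount rate and the $100 billion figure below are illustrative assumptions, not numbers the CBO publishes:

```python
# Illustrative only: the discount rate (3%) and the scored amount
# ($100B in 2034) are assumptions for the sketch, not CBO figures.
def present_value(amount, year, base_year=2025, rate=0.03):
    """Discount a dollar amount in `year` back to base-year dollars."""
    return amount / (1 + rate) ** (year - base_year)

# A hypothetical $100B deficit effect scored in 2034 is worth
# meaningfully less in today's dollars:
print(round(present_value(100, 2034), 1))  # 76.6
```

The choice of rate matters a lot over a decade, which is exactly why not disclosing it makes the score hard to evaluate.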
So you have to do it yourself. Now, Peter Navarro published an
article, which I think is probably the most pivotal
article about this whole topic.
Peter Navarro of Tariff fame. Yeah.
Yeah, here, I think he nails it right in the bullseye, which is the bond market needs to make a
decision on one very critical assumption when they build their own model. Okay, so let's ignore the
CBO's kind of brittle math and the Excel that they post on their website. People are going to do
their own because they're talking about managing their own money. But Navarro basically points to the critical thing, which is listen, those CBO assumptions also
include a fatal error, which is they assume these very low levels of GDP. What you're probably going
to see in Q2 is a really hot GDP print. If I'm a betting man, which I am, I think the GDP print's going to come in above three, not quite four, but above three.
And so what Peter is saying here is, hey guys, like you're estimating 1.7% GDP.
Why don't you assume 2.2 or why don't you assume 2.7 or any number?
Or really what he's saying is why don't you build a sensitivity so that you can see the
implications of that?
And I think that that is a very important point.
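The sensitivity being asked for can be sketched in a few lines. The roughly $5 trillion receipts baseline and the assumption that receipts scale with nominal GDP are simplifications for illustration, not CBO methodology:

```python
# Illustrative sensitivity table: project federal receipts under the
# growth assumptions mentioned above. The $5T baseline and the
# receipts-track-GDP assumption are simplifications, not CBO method.
BASE_RECEIPTS = 5.0  # trillions of dollars, illustrative

def receipts_after(years, growth):
    """Receipts if they compound with nominal GDP growth."""
    return BASE_RECEIPTS * (1 + growth) ** years

for g in (0.017, 0.022, 0.027):
    print(f"{g:.1%} growth -> ${receipts_after(10, g):.2f}T after 10 years")
```

Even half a point of growth compounds into hundreds of billions of dollars of receipts over a ten-year window, which is why the baseline assumption dominates the score.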
Okay. So where do I net out a week later, Jason? It's pretty much summarized in the tweet that I
posted earlier today. So over the last week, as people have digested it, I think that there are small actors in this play and big actors. The biggest actor is obviously President Trump,
but the second biggest actor is the long end of the bond market. These are the central bankers, the long bond holders,
and these macro hedge funds. Why? Because they will ultimately determine the United States' cost
of capital. How expensive will it be to finance our deficits? Irrespective of whatever the number
is, it could be a dollar or it could be a trillion dollars, that doesn't matter right now.
The point is, what is going to be our cost of capital?
And what's happened over the last little while
is that they've steepened the curve
and they've made it more expensive for us to borrow money.
That's just the fact.
So how do we get in front of this?
I think the most important thing,
if you think about what Peter
Navarro said is this plan and the bill can work if we get the GDP right. Okay. So how
do you get the GDP right? And this is where I have one very narrow set of things that
I think we need to improve. And the specific thing that I'll go back to is today,
America is at a supply demand trade-off on the energy side.
What does that mean?
We literally consume every single bit of energy
that we make.
We don't have slack in the system.
We are growing our energy demands on average
about 3% a year.
So I think the most critical thing we need to do
is to make sure the energy markets stay robust,
meaning there's a lot of investment that people are making.
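For scale, 3% annual demand growth compounds quickly; a quick sketch of the implied doubling time:

```python
import math

# At ~3% annual demand growth, the rule-of-thumb question is how long
# until demand doubles, i.e. how long until we need twice the supply.
growth = 0.03
doubling_years = math.log(2) / math.log(1 + growth)
print(round(doubling_years, 1))  # ~23.4 years
```

With no slack in the system today, that compounding is why any pause in capacity investment shows up quickly.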
On Tuesday, I announced a deal that I did,
building a one gigawatt data center in Arizona.
This is a lot of money.
This is little old me, but there are
lots of people ripping in huge, huge, huge checks, hundreds of
billions of dollars. I think the sole focus has to be to make
sure that the energy policy of America is robust, and it keeps
all the electrons online. If there's any contraction, I think
it'll hit the GDP number, because we won't have the energy we need. And that's where things start
to get a little funky. So I think where I am is, I think
President Trump should get what he wants. I think the bill can
work narrowly address the energy provisions. And I think we live
to fight another day.
So Friedberg, cynical approach might be we're working the refs
here, the CBO is not taking into account GDP, this GDP has a magical unicorn in it, AI and energy are gonna spur this amazing growth. And the bond markets don't believe it either. So are we looking at just a party, I'll put the administration aside, that is just as recklessly spending as the Democrats, and they want to change the formula by which they're judged in the future, that there's going to be
magically all this growth and growth solves all problems. And what we really need to do,
to your point, I think two weeks ago, that this is just disgraceful to put up this much spending,
and we have to have austerity and we need to increase maybe the discipline in the country, and both parties have to be part of that. I'm asking you
from the cynical perspective maybe to represent or steelman the other side here.
We had a conversation with Senator Ron Johnson after we recorded the pod last week,
and he was very clear in a key point, which is that this bill addresses mandatory spending.
Just to give you a sense, 70% of our federal budget is mandatory spending.
30% falls into that discretionary category. The mandatory spending is composed of the interest
on the debt, which is now well over a trillion dollars a year on its way to a trillion five, Medicare, Medicaid, Social Security,
and some other income security programs.
And as Ron Johnson shared with us,
over the years, more and more programs have been put
into the mandatory spending category.
And so you can get past the filibustering in the Senate
to be able to get budget adjustments done.
The key thing he's focused on, and Rand Paul is focused on, and I've talked about, is the spending level of our mandatory programs. The big
beautiful bill proposes a roughly $70 billion per year cut
in Medicaid. Okay, and that sounds awful. How could you do
that to people? In 2019, the year before COVID, Medicaid spending was $627 billion.
2024, it was $914 billion.
So the $70 billion cut gets you down to 840.
You're still roughly, call it 40% above
where you were in 2019.
So is that the right level?
And fundamentally, the opportunity
to cut those mandatory programs,
which I know sounds awful to
cut Social Security and cut Medicaid, but the reality is they're not just being cut from a low
level, they're being cut from a level that's 60 plus percent higher than they were in 2019.
I gave you another example, which is the SNAP program, the food stamp program. Again,
$15 billion of the 120 a year
that we spend on food stamps is being used to buy soda.
And a whole another chunk of that 120 is being used
to buy other junk food.
So they have proposed in this bill to cut SNAP down to 90.
And it was 60 in 2019.
So it's still 50% above where it was in 2019.
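Those percentages can be checked directly with the dollar figures as quoted in the conversation (the exact Medicaid math comes out closer to 35% above 2019 than the "call it 40%" rounding used above):

```python
# Spending levels in $B as quoted above: (2019 level, current, post-cut).
programs = {
    "Medicaid": (627, 914, 914 - 70),  # ~$70B/yr proposed cut
    "SNAP":     (60, 120, 90),         # proposed cut to $90B
}

for name, (y2019, current, after_cut) in programs.items():
    above_2019 = (after_cut / y2019 - 1) * 100
    print(f"{name}: cut to {after_cut}B, still {above_2019:.0f}% above 2019")
```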
So the key point that's being made by Ron Johnson
and others is that the spending on these mandatory programs,
which account for nearly three quarters
of our federal budget are still very elevated
relative to where we were in 2019.
And we are not gonna get out of our deficit
barring a massive increase in GDP
without changes to the spending level.
Now, I don't put the blame on the White House.
This bill passed with one vote in the House, one vote.
And so a key point to note,
and I've said this from day one,
and every time I've gone to DC
and every time we've talked about Doge,
I've said there's no way any of this stuff's gonna change
without legislative action from the Congress.
And here we are seeing Congress for whatever reason,
you can listen to Ron Johnson, you can listen to Rand Paul, you can listen to others say, you know what, we can't cut
that deep, it is going to be too harmful to our constituents, we need to keep the programs at
their current levels, or make no changes at all, or only modest changes. And that's where we are,
that's the reality. Now, I do think that Navarro did an excellent job in his op ed for
whatever criticism we may want to lay on Navarro for many other things. He pointed out that the
CBO projections in 2017 for the next year's GDP growth numbers were 1.8 to 2%. And it actually came in at 2.9%, a full point higher, because of the Tax Cuts and Jobs Act that was passed by the Trump administration in 2017. So the additional money
that goes into investments because lower taxes are being
paid fueled GDP growth. This is what some people call trickle
down economics. People ridicule it, they say it doesn't work.
It's not real. But in this particular instance, they cut
taxes and the GDP grew much faster than was projected or
estimated by the economists
at the CBO. So the argument that's being made is that we are not capturing many of the upsides
in the GDP numbers that are being projected. And I will be honest about this. I don't think
anyone knows how much the GDP is going to grow. We don't know the economic benefits and effects of AI. We don't know the economic benefits and effects of
the work that's being done to deregulate. Another key point,
which is not talked about by Navarro or anywhere else, there's
a broad effort to deregulate standing up new energy systems,
deregulate industry and pharma, deregulate banking, Bessent
talked about this in our interview with him, all of those
deregulatory actions, theoretically, should drive more
investment dollars. Because if you can get a biotech drug to
market in five years instead of 10, you'll invest more in
developing new biotech drugs. If you can stand up a new nuclear
reactor in seven years instead of 30, you'll build more nuclear
reactors, money will flow. If you can get
a new factory working because it's a lot easier and faster to build the factory and cheaper,
you'll build more factories and production will go up.
People were really taken by the way by your comment that you would shut up about the deficit
if we had like a really great energy policy, we were dumping a lot on top of it.
I want to build on the point that both Chamath and Friedberg
made about growth rates.
So there's a very important chart here from Fred.
This is the Federal Reserve of St. Louis.
This is federal receipts.
So basically, it's federal tax revenue as a percent of GDP.
And this goes all the way back to the 1930s, 1940s.
So if you look in the post-World War II period,
you can see, just eyeballing it
that there's a lot of variation around this, but the line is around 17.5%
plus or minus 2%. And the interesting thing is that this chart reflects
radically different tax rates. So for example, during some of these periods we've had 90% top marginal tax rates,
we had 70% top marginal tax rates. So yeah, under Jimmy Carter, the top marginal tax rate was I think 70%.
We've had tax rates under Reagan or Clinton in the 20s. So the point is that the tax rate that you have and what you actually collect as a percent of GDP don't correlate. The most important thing by far is just how the economy is doing. If you look at the top tick, it's around 2000 there. If you just mouse over it, yeah, we get like just under 20% of federal receipts as a percent of GDP, and tax rates were quite low back then. The reason why is we had an economic boom.
So look, the point is the most important thing in terms of tax revenue is having a good economy.
And this is why you don't just want to have very high tax rates because they clobber your
economy.
So this point that Navarro was making in that article, it actually makes sense.
I mean, 1.7 percent is a pretty tepid growth assumption.
We should be able to grow a lot faster.
And if we have a favorable tax policy,
you can grow a lot faster.
Now, if you go to spending,
can you pull up the Fred chart on spending?
What you see here is that, I mean, it's been kind of going up,
but let's say that since the mid 1970s or so,
federal net outlays as a percent of GDP,
so basically spending, was
around 20% of GDP.
And then what happened is during COVID, it went crazy, went all the way up to 30%, and
now it's back down to low 20s, but it's still not back down to 20.
And what we need to do is grow the economy, we have to grow GDP to the point where federal
net outlays are back around 20%.
If you could get tax revenue to the historical mean of around 17.5% or 17%, you get spending to 20%,
then you have a budget deficit of 3%, which is much more tolerable.
And I think that's Bessent's target under his 3-3-3 plan, right?
Is you get GDP growth back up to 3% and you get the budget deficit down to 3%.
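The arithmetic behind that target is simple. A sketch (the dollar GDP figure is an illustrative assumption, not a forecast):

```python
# Deficit as a share of GDP is just outlays minus receipts.
receipts_pct = 17.0  # roughly the post-WWII mean tax take, as discussed
outlays_pct = 20.0   # the spending target discussed above
deficit_pct = outlays_pct - receipts_pct
print(deficit_pct)   # 3.0 -- the deficit "3" in the 3-3-3 framing

# On an illustrative $30T GDP (assumption), that deficit in dollars:
gdp_trillions = 30.0
print(gdp_trillions * deficit_pct / 100)  # 0.9, i.e. ~$900B
```

That would be less than half the current roughly $2 trillion annual deficit discussed earlier.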
All right, Chamath, you had some charts you wanted to share.
Well, I think what's amazing is if you take last week and now again this week,
we're all converging on the same thing. The path out of this is through GDP growth.
And I just want everybody to understand where we are.
And this is without judgment, this is just the facts. What this chart shows in gray is the total supply
of power in the United States.
And the blue line is the utilization.
So what you build for is what you think is a premium above the demand, right?
You'd say if there's one unit of demand, let's have 1.2 units of supply, we'll be okay.
But as it turns out historically in the United States,
we've had these cycles where we didn't really know what the demand curve would look like. And so over the last number of years,
we've stopped really building supply in power. But what
happened with things like AI and all of these other things is
that the demand just continued to spike. And so what this
chart shows is we are at a standstill sitting here today in
2025. On margin, we're actually short power, which is to say,
sometimes there are brownouts, sometimes there's lack of power,
because we didn't add enough capacity. So that's where we are
today. So then we talk about all of these new kinds of energy.
And this is just meant to ground us in the facts. If you tried to turn on a project today,
sitting here in May of 2025,
here's what the timelines are.
We all talk about SMRs, small modular reactors.
The reality is that if you get everything permitted
and you believe the technology can be de-risked,
you're still in a 2035 plus timeframe.
You're a decade away.
If you have an unplanned Nat gas plant,
today the fastest you could get that on
is four years from now.
If we tried to restart a mothballed nuclear reactor,
of which there are only three we can restart, that's a 2027 to 2030
timeframe. So let's give us the benefit of the doubt. That's two years away.
If we needed to plan that gas plant, there's already 24 gigawatts in the queue, which can't
get turned on. So where does this end up? And this is where I think we need to strip away all the partisanship and understand what we're dealing with. We have
ready supply of renewable and storage options today. It's the
fastest thing that you can turn on. It allows us to turn on
supply to meet the demand and utilization. So I just think
it's important to understand that we must not lose energy, we cannot lose the
energy market, because that is the critical driver of all the
GDP.
All right, Nippon Steel and the US Steel merger got cleared by
President Trump. This was something that was being blocked
by Biden, obviously, for national security reasons. Nippon is
going to acquire US Steel for $14.9 billion. Biden blocked that, as
we had
discussed. On Friday, Trump cleared the deal to go through, calling it a partnership that will create 70,000 jobs in the US. And on Sunday, Trump called the deal an investment. It's a partial ownership, but it will be controlled by the USA. There seems to be a reframing of this deal, that the United States is going to benefit from it, but it's not a sale.
Let's, let's set some context.
It's an investment.
Yeah.
Let's set some context.
The United States is always on the wrong side of these deals.
Okay.
We've been on the wrong side for 20 years, meaning we show up when an asset is stranded
or completely run into the ground.
For example, we did the auto bailouts at the end of the great financial crisis. If it's not a company, then it's toxic assets; we set up something called TARP. What do we get? Not much in return.
In this, it's the opposite. And I think that this strategy has worked for many other countries
really well. So if you look at Brazil, companies like Embraer and Vale, which are really big Brazilian national champions,
have a partnership, a pretty tight coupling
with the Brazilian government.
The Brazilians have a golden vote.
If you look inside of the UK,
there's a bunch of aerospace and defense companies,
including Rolls-Royce, that have a very tight coupling
with the UK government, they have a golden vote.
If you look in China, companies like ByteDance and CATL
have a very tight coupling with the Chinese government and the Chinese government has a
golden vote. And so what are all of those deals? Those deals are about companies that are thriving
and on the forward foot. And so I think this is a really important example of things that we need
to copy. I've said this before, but one part of China
that I think we need to pay very close attention to
is that Hu Jintao in 2003 laid out a plan.
And he said, we are going to create 10 national champions
in China, in all the critical industries that
are going to matter for the next 50 years,
including things like batteries and rare earths and AI.
And they did it.
And backing those companies allowed them to thrive
and crush it, and I think that we need to do that
and compete with those folks on an equal playing field.
In all industries or in very specific strategic ones?
Because that would seem like corrupting capitalism
in free markets, would be the steelman.
There's 10 industries that matter.
Give us some of them. Steel is one.
I think the precursors for pharmaceuticals
are absolutely critical.
I think AI is absolutely critical.
I think the upstream lithography and deposition and chip-making capability, absolutely critical.
I think batteries are absolutely critical and I think rare earths
and the specialty chemical supply chain absolutely critical. If you have those five, you are in
control of your own destiny in the sense that you can keep your citizens healthy and you can make
all the stuff for the future. So I think if the president is creating a more expansive idea beyond US Steel with this idea of US support, maybe there'll be preferred capital in the future to US Steel. But if he creates a category by category thing across five or six of these critical areas of the future, I think it's super smart and we should do more of it.
Sax, what do you think?
Interventionism, putting your thumb on the scale, golden votes, a good idea for America in very narrow verticals or let the free market decide what your thoughts
on this golden vote, having a board seat, et cetera.
Well, it depends what the free market, so to speak, produced.
And the reality is, over the past 25 years, we exported a lot of this manufacturing capacity to China. And I don't think it was a free market, because they had all these advantages under the WTO that we talked about in a previous podcast. They were able to subsidize their national champions while still remaining compliant with the WTO rules because supposedly they were a developing country. It was totally unfair. And what they would do is, through these subsidies, they would allow these national champions
to essentially dump their products in the global market
and drive everyone else out of business.
They became the low cost producers.
I think that as the president just said recently,
not every industry has to be treated as strategic.
Clothes and toys, we don't necessarily have to reshore in the United States, but steel production is definitely strategic.
Steel, aluminum, and I'd say the rare earths, we have to have that capacity.
We cannot be completely dependent on China for our supply chain.
So some of these industries have to be reshored.
And if you need subsidies to do it, I think that you do it for national security reasons,
first and foremost.
Makes no sense.
Yeah.
Yeah.
There are other industries where the private market works just fine.
And what we need to do to help those companies is simply not get in their way with unnecessary
red tape and regulations.
So I would say empower the free market when America is the winner.
And then in other areas where they're necessary for national security, then you have to be
willing to basically protect our industries. Freeberg, it seems like the great innovation here
might also be the American public getting upside.
When we gave loans to Solyndra and Tesla and Fisker
and a bunch of people for battery powered energy
under Obama, we just got paid back in some cases by Elon,
other people defaulted, but we didn't get equity.
What if we had, instead of getting our 500 million back in the loan from Elon, which he paid back early and with
interest, if we got half back and we got half in equity, RSUs, whatever, stock options, warrants,
this would be an incredible innovation. So what are your thoughts here? Because people look to
this podcast as, hey, the free market podcast, but this does seem to be a notable exception here of maybe we should get involved and do these
golden, you know, share votes, board seats, you know, maybe
more creative structures in order to win faster. What are
your thoughts?
I don't like it. I don't like the government in markets; keep the government out of the markets. It creates a slippery slope. First of all, I think markets don't operate well if
government's involved, it gets inefficient and that hurts consumers. It hurts productivity.
It hurts the economy. Second, I think it's a slippery slope. You do one thing now.
Let me ask you a question though. If government non-intervention results in all the steel
production moving offshore, if it results in all the rare earth processing and the rare earth magnet casting industries moving offshore.
In fact, not just moving offshore, but moving to an adversarial nation such that they can
just switch off our supply chain for pretty much every electric motor.
Is that an outcome of the
quote unquote free market that we should accept?
Well, then I think that's where the government can play a role in trade deals to manage that
effect. So you can create incentives that'll drive onshore manufacturing by increasing
the tariff or restricting trade with foreign countries so that there isn't a cheaper alternative,
which is obviously one of the plays that this Trump administration is trying to do.
I'd rather have that mechanism than the government making actual market-based decisions and business decisions.
You know how inefficient government runs.
You know how difficult it is to assume that that bureaucracy is actually ever gonna act and pick any best interest or any good interest at all.
They're just gonna gum it all up.
So I'd rather keep the government entirely out of the market, create a trade incentive where the trade incentive basically will drive private markets,
private capital to build that industry onshore here, because there isn't one and there's demand
for it. Because you've restricted access to the foreign market, that I think would be the best
general solution, Sax. And then I think it's a slippery slope, because then you could always
rationalize something being strategic, something being security interests in the United States.
So then every industry suddenly gets government intervention and government involvement.
And then the third thing is I don't want the government making money that the Congress
then says, Hey, we've got more money, we've got more revenue, let's spend more money.
Because then they'll create a bunch of waste and nonsense that'll arise from having increased
revenue.
One aside, and I will say one thing where I do think we do a poor job, to answer your question, J Cal: we don't do a good job of investing the retirement funds that we've mandated through Social Security. We should be taking the four and a half trillion dollars
that our social security beneficiaries have had deducted from their paychecks over many,
many years. And those social security future retirees or current retirees are getting
completely ripped off because their money is
being loaned to the federal government.
It's not being invested.
It's been loaned to the government to spend money
and run a deficit and ultimately inflate away
the value of the dollar.
We should have been investing those dollars
in some of these strategic assets.
So if ever there were to be shares or investment
that the government does, it should
be done through strategic investing
through the Social Security or Retirement Program. Similar, by the way, to what's done in Australia,
where these supers have created an extraordinary surplus of capital. Same in Norway, same in the
Middle East countries, incredible sovereign wealth funds that benefit the retirees and the population
at large. That's where the dollars should be invested from. I do think the fundamental
focus priority right now should be reforming Social Security while we still have the chance. We have
until 2032, when Social Security will be functionally bankrupt,
and everyone's going to get overtaxed and kids are going to
end up having to pay through inflation for the benefits of
the retirees of the last generation.
Freeberg, right, we're on a seven-year shot clock to when Social Security is not funded.
And by the way, this opportunity to fix mandatory spending,
it was an opportunity to introduce some structural reform
in social security.
Another reason why I think that there is a degree
of disgrace in this bill,
particularly with how Congress had acted
and not addressing what is becoming a critical issue
because everyone wants to get reelected
in the next 12 months, 18 months,
they've got elections coming up.
So everyone's scrambling to not mess with that
because you can't touch it.
It's like, you know what?
Guys, this is bankrupt in seven years.
It's gonna cost us five, 10 times as much
when we have to deal with it
when everyone runs out of money.
Deal with it now.
I have to say. Fix the problem.
And by the way, we should flip all that money,
four and a half trillion dollars
into an investment account for the retirees
where they can own equities
and they can make
investments in the markets, and they can participate in the
upside of American industry and the GDP growth that's coming.
Instead, they're getting paid 3.8% or four and a half percent average from treasuries that they own, which by the way now have a lower credit rating than they've ever had. You know, it's
crazy.
I'm in complete agreement with you. And I think it's a lack of leadership on Trump's part.
If Trump is going to criticize Taylor Swift and Zelensky
and Putin and everybody all day long on truth social,
he can criticize Congress and the Democrats
and the Republicans on not cutting spending.
I think he should speak up.
I think he was elected to do that.
It was a big part of the mandate.
And he should tone down
the tariff chaos and lean into intelligent immigration, you know, recruiting
great talent to this country. And he should be pushing to make these bills control spending.
That's just one person's belief. For the chairman dictator Chamath Palihapitiya, your czar David Sacks in that crisp Brioni white shirt, very beautiful, and the Sultan of Science deep in his Wally era, I am the world's greatest moderator, and that's, Freeberg will tell you, the executive producer for life here at the All-In Podcast. We'll see you all next time. Bye bye. I love you boys. Bye bye!
Besties are gone. That's my dog taking a notice in your driveway, Sacks.
Oh man.
My habitat sure will meet me at once.
We should all just get a room and just have one big huge orgy cause they're all just useless.
It's like this sexual tension that they just need to release somehow.
Wet your beak.
Wet your beak.
Wet your beak.
We need to get merch.
I'm doing all these other things.
I'm doing all these other things. Wet your beak. Wet your beak.
We need to get merch.