Big Technology Podcast - Economics of OpenAI, Tesla’s Robotics Pivot, Hedonic Treadmill — With Slate Money
Episode Date: May 8, 2024

Felix Salmon, Emily Peck, and Elizabeth Spiers are the hosts of the Slate Money podcast. They join Big Technology to discuss the economics and societal implications of artificial intelligence and robotics. Tune in to hear their nuanced take on the costs, challenges, and potential paths forward for companies like OpenAI and Tesla as they pursue ambitious goals in AI and robotics. We also cover the realities of retirement in modern economies and the ongoing debate over raising retirement ages. Join us for a thought-provoking conversation at the intersection of tech, business, and society, featuring experts who aren't afraid to challenge assumptions and dive deep into the details.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/

Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
The business of OpenAI gets weird.
Tesla now wants to be a robotics company as its stock price drops,
plus when is it time to get off the hedonic treadmill?
All that and more coming up with the cast of Slate Money right after this.
Welcome to Big Technology Podcast,
a show for cool-headed, nuanced conversation of the tech world and beyond.
We have such a fun show today.
One I've been looking forward to: the cast of the Slate Money podcast is here to talk about a series of fun stories where tech, economics, and finance meet.
We're going to do a home and home series, so they're here, and then I'll come over to their show in a couple of weeks, and I'm pumped for that.
And so let's kick it off. I just want to welcome the cast here. Felix Salmon is here. He's the chief financial correspondent at Axios, Felix. Welcome. Thank you very much. Emily Peck is also here. She's the markets correspondent at Axios. Emily, welcome.
Hello, hello. I'm excited to be described as the member of a cast because now I feel like I play myself on Slate Money, so that's spinning my head.
Last but not least, Elizabeth Spiers is here.
She's a contributing writer for the New York Times opinion section, and she writes Slate's Pay Dirt.
Elizabeth, welcome.
Thanks for having us.
Thanks for coming in.
I think that the combination of economics and tech is like very fascinating right now because we have this very weird situation where companies and investors keep plowing money into these AI startups.
And we're not really sure what the return is going to be, what they're actually using that money for, what the business outcome will be.
And yet we start to hear numbers like trillions of dollars of investment.
Really? That's what we're hearing now.
That was Sam Altman with one of the craziest numbers I've ever heard. I think I might have called it deranged on Axios. He sort of came out and said, well, he didn't quite come out and say it, he was reported to have said that he was looking to raise $7 trillion to build a whole new infrastructure around AI, which is so far beyond any amount of investment that has ever been put into anything ever, that it kind of, it kind of like makes you think that maybe he just doesn't understand numbers?
Well, it's also double the most valuable company in the world. And that's sort of
what makes this conversation he had on 20VC with Harry Stebbings really interesting. And sort of we can
riff on it because we are trying to find out what the economics of this AI business is. And so here's
what Stebbings says. He goes, in terms of marginal cost versus marginal revenue, how do we think about when marginal revenue exceeds marginal cost, basically? Like, are you going to have a profitable business? And Sam goes, I mean, truly, I think of all the things we could talk about, that is the most boring, no offense, that's the most boring question I can imagine.
Stebbings goes, why is that boring? And Sam goes, well, you have to believe that the price of compute will continue to fall, and the value of AI, as the models get better and better, will go up and up. And, like, the equation works out really easily. So that's Sam Altman's view. I wouldn't know what his equation is, because he does seem to talk about all of this as if the numbers don't really matter. You're just putting in one bigger number, and there's a smaller one for the input. And for you, where's the line between techno-optimism and techno-naivete?
So I can, so the argument he's making is, like, he's making two different arguments, and both of them make sense, but I feel like he can't make both at the same time. The first argument he's
making, and this is the same argument that Jensen Huang has been making from Nvidia, which is
the price of compute has been coming down for decades, and it has now reached the point at which
AI is possible, and there is no indication that it's going to stop coming down. And so long as it
keeps on going down at the same rate that it has been coming down, which is more or less
Moore's law, you don't need to worry about the long-term price of compute because it's going to go to
zero very quickly. And then all you need to worry about is do I get any revenues? And if the
revenues are going up and then at some point those lines cross and you become a very profitable
company. That's a perfectly reasonable position to hold. And then, however, at the same time,
Sam Altman has this other position, which is basically: in order for the price of compute to come down to a level where AI is profitable, we need $7 trillion
of investment. And that is objectively something that is never going to happen. There just isn't
$7 trillion of freely available cash in the world to invest in anything. And if there was,
it would not be going into AI mostly. So that kind of, his own rhetoric is undercutting his own
rhetoric there. Can I inject some nuance here, which is that the $7 trillion is a number that
Sam hasn't fully confirmed yet. And it's also something that will be for compute and potentially
other things, maybe training. But I think the core of your argument is totally right, which is that
this stuff is going to cost a ton of money to train. And it does sort of contradict this idea
that the cost of computing is going to come down to zero. It has already cost a ton of money.
And if you look at right now, you know, how much it costs for ChatGPT to give a single answer to a single question, how much it costs for Midjourney to produce a single image.
It's a huge amount of money and all of these companies are losing money on every, you know, response, basically.
And this is a reprise of the famous blitzscaling model, you know, and Tim O'Reilly has a great column in The Information about this.
He's basically saying, what is going on right now is you have a handful of companies, led by OpenAI, who are trying to invest as much money as they can as early as possible in order to gain market share and IP and, you know, basically own the AIs, and reach a point where no one else can afford to build one, or they own it in a way that, you know, they have patents on it or something.
It's very unclear, but they want to monopolize AI going forwards.
And all of these incredibly high, these multi-billion dollar valuations that we're seeing only make sense in a world where the companies have some kind of comparative advantage, have some kind of monopoly on something.
And this is definitely the way the U.S. tech industry has evolved over
the course of this century, right?
You have a small handful of multi-trillion dollar tech companies that got that way by investing
a huge amount of money and getting a bunch of market share before anyone else and then
creating that kind of moat and becoming, you know, basically impossible to compete with.
And the bet that the investors are making is that the same thing is going to happen.
And there's just going to be a handful of AI companies rather than AI being a sort of broad
public utility like, say, TCP/IP, that everyone can use.
Yeah, that also just creates an incentive for any tech CEO that's following that model
to kind of stick a finger in the air in terms of determining how much capital they need
and picking the biggest number possible, which seems to be part of what Altman's doing.
But the difference here to the last time around, the building of the big tech monopolies,
which seems to be how we ended up, is that the cost of entry into the AI space for a startup
is so high that you already have monopolies. OpenAI is already pretty much a monopoly on AI. I mean, it's, and it's mostly funded, right, by a big tech company. It doesn't seem like there's a lot of innovation around the startup space because the cost of entry is so high.
Well, we have, we have one genuine monopoly in AI, which is Nvidia, right? Everyone in the AI space is using the same H100 chip. And one of the reasons why Sam Altman wants a much bigger, broader ecosystem is that he feels that it is unhealthy for Nvidia to be the only company making AI chips. And so it's like, I want to build fabs. Like,
okay, one, fabs do not cost seven trillion dollars. They cost like $50 billion. But two, like even
raising $50 billion to build a fab is very hard given that, you know, a large number of companies
have tried to build, you know, state-of-the-art fabs and have failed. Really only TSMC has shown itself capable of building those fabs, right?
And so it's that Nvidia-TSMC duopoly that really owns most of the sort of moat around here.
If Sam Altman's right and the cost of the compute does come down,
there's also the other side of the scale, right?
And the revenue piece.
And are we at the point yet, Alex, you follow this more closely,
that there's like a lot of money to be made in AI for real?
I know NVIDIA is making a lot of money
because it's selling chips to companies
who hope that they can make a lot of money from AI
but has anyone done anything where it's like
that's the iPhone of AI or whatever?
Like there's no consumer-facing AI product
that is making any revenue
but like you're right
in that kind of middle
there's like, who was it, I can't remember, it was some big consultancy company, you know, Accenture or something, that said they just made $6 billion on AI consulting.
Yeah everyone is still like derivative stuff.
It's all derivative.
Like, one of the things that we have seen in, what is it, Alex, like a year and a half since ChatGPT came out and caused, you know, all of the craziness, is that there is very little real consumer demand from normal human beings who want to pay cash for this.
The one last thing I think could be a moneymaker down the line is the labor cost savings.
Like Jeffrey Katzenberg had some quote in Axios today, I think Dan Primack had it, where he said, like, with AI, the timeline for making a movie is basically cut in half.
The amount of labor you need is cut in half.
That seems like amazing amounts of money, but it's not sexy, like some kind of consumer thing.
So I do want to push back on the idea that OpenAI is a monopoly in this, because you do have other companies. And this is going to lead into your other question. But you do have other companies building these frontier models, these foundational models, whether that's Meta with Llama 3 and Anthropic with Claude. Like, Claude recently surpassed OpenAI for a moment.
And then this is where the interesting question about the economics happens for me,
which is that, you know, if all these models become commodified,
like you're going to have Meta's Llama 3 available for free, open source,
then where's the actual value created?
And does it actually accrue to the model creators or to the people that build on top?
And I strongly believe that it's going to be accruing to the companies that build on top of these models,
whether that is a consumer product or business, this is like labor saving and these business
efficiencies, the companies that use them innovatively are the ones that like will actually
make the money here. And that sort of goes to Meta's bet that's like, let's just give this
away for free. It's not going to be worth really anything. And maybe that's going to Sam's point
that the cost of intelligence is going to be low. But then you have a real ROI question.
Yeah, no, this is exactly correct. I think Emily's point is very well taken, or Jeffrey Katzenberg's point is very well taken, that, you know, one way to make lots of money out of a technology is to take the technology and charge for it. Another way to make lots of money out of a technology is to take a technology and use it to cut your costs. And that does seem to be something that people and businesses are already doing with some genuine profitable effect, and it is going to
become much more common over the next few years. And that is going to be good for the economy and
that is going to be good for all of the companies that do it. And on some level, you know, if the
cost savings are high enough, then the companies will be willing to pay some non-trivial amount
of money for the AI that they're using. On the other hand, if the cost savings are kind of
the same no matter which AI you use, and, you know, some of the AI is open source, or you can just build your own with open source tools, then they'll probably go that way, and there won't be a lot of direct revenues to the AI companies. And so AI will be this force for productivity and profitability in the economy, and the AI companies themselves, you know, OpenAI, Anthropic, and the rest of them, will turn out to be not particularly valuable.
And this, by the way, is an outcome
that OpenAI has always envisaged, right?
Like, in the early days when they were asking for funding,
they said, we would like you to consider your, you know,
funding to be in the spirit of a donation.
And they're still a non-profit.
And if that is the outcome,
that OpenAI winds up just making everyone else profitable
without being profitable itself,
That is a good outcome for the economy and that is a good outcome for the world.
Well, another factor here that I think, I'm not sure if Altman has spoken directly to this is that, you know, you don't have infinite data and the cost of data acquisition has not, it certainly isn't falling the way that cost of computing is falling.
You know, right now you have AI companies looking at buying traditional book publishers just so that they can add to the corpus of things that they're training the models on.
So the inherent value of the business isn't just about the algorithmic model.
It's about what you can do with it within the limitations of the data you have to train it on.
Are we thinking too small here?
That's like the other question that's coming up because there's another thing that Sam Altman said last week that went even more viral than the thing that I mentioned.
Whether we burn 500 million a year or 5 billion or 50 billion a year, I don't care.
I genuinely don't.
As long as we can, I think, stay on a trajectory.
where eventually we create way more value for society than that.
And as long as we can figure out a way to pay the bills, like, we're making AGI.
It's going to be expensive.
It's totally worth it.
So, yeah, I mean, it is, it really is, I hate to say this, but it kind of smells a little
bit like Sam Bankman-Fried, you know, like not saying that he's a, but that kind of like,
it doesn't matter how much it costs just as long as you have like a positive EV somewhere
down the road, that idea of you can lose any amount of money just so long as the value of your
company is rising faster than the losses are piling up is a very dangerous game to play
if you don't really have a sort of, let's call it, three to four year plan for turning it into
profits. Like, Sam's idea here seems to be like, well, maybe at some point, 10 years down the
line, we will have AGI and then we will make lots of money. And there are two problems with that,
which is one, that 10 years down the line is a very long time to be burning $50 billion a year.
But two is that he seems to just assume that once there's AGI, then Open AI will be a trillion
dollar company and worth lots of money. And again, that's not obvious either.
To Elizabeth's point, you know, I think Sam is already trying to move on from the LLM model, right?
Like right now, most of the AI that is getting most of the buzz are these large language models that need to be trained on a bunch of like existing language.
But I think everyone kind of is in agreement that if you're going to get AGI, you know, artificial general intelligence, it's not going to be a chatbot that basically gives language answers to language questions, because it's trained on language models.
He's going to need to invest a huge amount of money in something much bigger than that.
We don't even know if AGI is possible.
So it's sort of putting any timeline underneath it is, you know, speculative.
And I realize that that's, you know, part of your job.
if you're working in an innovative, you know, frontier tech company.
But in the case of AGI specifically, even, you know, experts who have been studying this for decades
aren't sure that we will ever get to AGI.
So even making estimates about what sort of resources it would take and how long it would take.
I mean, it's very strange to look at a company, you know, it's not a public company.
I guess, again, Altman can make sort of speculative statements about it, but he does so with such confidence when the underlying goal is not even something we
know can be achieved, right? But this is one of the things that Silicon Valley VCs love: people who have great confidence about things that are highly improbable. And they've learned by looking that if you fund someone who is very confident about something that seems impossible, then, like, there's a good chance you'll lose all your money.
But there's also, like, those are the ones that have the biggest returns as well.
When he talks about creating way more value for society, like, what?
What are the problems that AGI solves?
Like, I can, like, rattle off many problems with society,
and none of them in my head can be solved by Sam Altman and his company at all.
I think the big answer on that front is scientific discovery.
Like, I think it's no accident that one of the things that DeepMind will tell you about is AlphaFold, like, in the first breath, where they've been able to decode proteins
where they've been able to decode proteins
because they think that will help for drug discovery.
And maybe there's an idea that you train these bots
on all of the scientific literature
and you give it some problem sets.
And the thing that they're able to do now, or that everybody's working on, is reasoning.
So they can break it down to the component parts
and then try different solutions on each step
and eventually get you to a solution.
And I do, so I do wonder, let's say we don't get to AGI, but let's say we get some things that, you know, maybe fall short but are close, right? So these agents that take action for us, this ability to reason, scientific discovery, making our everyday business operations more efficient. Maybe that is something that's quite valuable. I wouldn't, you know, happily burn $50 billion a year on it, but, you know, to earnestly take up Sam's case, like, maybe there is something there. I guess it's like the cure for cancer, that's like the big one.
But it's not, it's not just a cure for cancer. It's like a highly individualized cure for cancer, right? It's the ability for an individual person with an individual genome to go in with an individual, you know, cancerous growth and get a treatment that is tailored for them at a very low cost. Right now that kind of exists, but it costs like over a million dollars. And
If we can bring that down from a million dollars to, you know, $100, that's pretty revolutionary.
But all the money being spent on, the health care system is so inefficient and expensive,
and the problems are so basic. It's not to be all like there are starving people in, you know,
in other countries kind of an argument, but like there are more immediate and solvable health care
problems that these billions and billions and billions of dollars could go to solve to better society right now
versus, you know, spending $50 billion or $5 billion a year on something that may not ever come to fruition.
And maybe no one can afford it in the final answer.
So, Emily, like, I don't, you know, like, if you look at the people who are funding this,
some of them do have what you might call quasi-philanthropic goals.
They do think of this as a form of, like, for-profit philanthropy.
Matthew Bishop would call it philanthrocapitalism.
And, you know, okay, fine, we can have a whole other segment on that if we want.
But I don't think anyone is, you know, I mean, okay, there are, there's a small pocket of true
believers saying that this is the first best place to invest money for the sake of the well-being
of the planet.
And if you want to, you know, help the poor, then this is the best way to do it.
That small pocket, like, kind of lost a lot of credibility when FTX imploded, because a lot of them were, you know, effective altruists of some flavor.
And I think we've kind of moved on from that.
I think that to say that it is not the first best philanthropic place to invest your money to help the poor is not to say that it's a bad investment.
I mean, I agree with that.
But I think there are two other things that, you know, we have to look at.
One is that a lot of the sort of strategic money that's going into AI right now is still just about AI hype.
And whenever you sort of scratch the surface of what,
a lot of people like Altman are saying, they're clearly relying on the fact that most people,
when they think about AI, can't distinguish between, say, a large language model or machine
learning or image-based generative AI. It's all just one category, and these are very
different technologies. And I know we were going to talk about Tesla a little bit. Elon Musk is now
claiming that Tesla is an AI company. And when I see that, I just see an attempt to get money
that's already flowing into a very specific sector to start flowing in his direction.
Well, he's calling it a robotics company, which is different than AI. We can talk about that.
He's also got a separate company called xAI, which is an AI company for which he's raising like $5 billion.
He also said that his robotics model will be on sale by 2025. And if that, you know, that's,
Elon says lots of things.
But to your point, Elizabeth, insofar as the people making these investments,
and to be clear, these investments are large, but they're not enormous.
They're like, you know, some fraction of the VC money out there,
and the VC money out there is some small fraction of a total, you know, investor base.
Insofar as the people making these investments are being silly
and making category errors and doing all of the things that you say that they're doing,
like, being silly and making category errors is what VCs do. And the whole point about VC money is that it's risk capital that literally everyone who is invested in a VC fund can afford to lose. Like, this is the correct money to make dumb bets with, bets that are going to lose. It is not dangerous for VCs to, like, set a billion dollars on fire. It is perfectly fine. They always have and they always will.
Can I, let me give a counterpoint on that one.
A lot of the money funding these companies has come from the big tech companies, right?
So you think about Microsoft has been a huge funder of OpenAI.
And Google and Amazon have been huge funders of Anthropic.
And Meta has used its own money to build Llama 3.
So, like, try to find someone who's really taken VC money and put it into the development of large language models. And it's a little bit tough to find without big tech money.
So actually, I think what you have is, instead of VCs taking
this money and sort of squandering it, you have these tech companies taking the investment capital
of retail investors and institutions. It is not the investment capital of retail investors. It is
their own profits. These are all highly profitable companies. Microsoft is famously giving OpenAI Azure compute more than it is giving actual cash dollars. Google has definitely invested a lot of money
into DeepMind over the years, you know, and Facebook has famously bought billions of dollars
worth of H100 chips and, yeah, fine.
But this is money they can afford to spend, you know, and again, I'm not, you know,
these are already multi-trillion dollar companies.
If they burn a few billion dollars, they will still be multi-trillion dollar companies.
It's kind of no harm, no foul.
Okay.
So we've talked a little bit about Elon Musk trying to pivot to robotics within Tesla.
Why don't we take a break and come back and unpack that?
So we'll be back right after this.
Hey, everyone.
Let me tell you about The Hustle Daily Show,
a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines.
in 15 minutes or less, and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're
using right now.
And we're back here on Big Technology Podcast with the cast of Slate Money.
Great to have you all here.
Thanks for having us.
We're cast members.
The cast members.
I remember when I joined Disney, they were like, congratulations on becoming a cast member.
And I was like, okay, this is a really weird company to work for.
Yes.
Well, okay, so maybe we'll use a different word: hosts of Slate Money.
How's that?
Anyway, so we talked a little bit before the break about the Tesla robotics play.
It's happening in this moment where Tesla seems to be in rough shape, and I know you've talked
about it on the show, but just for context, it's down 25% year-to-date, though it's up 8%
over the past like one year, which is interesting, sort of kind of lost in this narrative.
But the BBC just had a story asking if the wheels have come off for Tesla, saying there was a time where it seemed like it could do no wrong, but now the company is struggling. And it really captures it: falling car sales, intense competition from Chinese brands, problems with the Cybertruck. Low sales have hit revenues and hurt profits, and the share price has fallen by more than a quarter since the start of the year.
It's now in the process of cutting 14,000 employees,
and it's also cut the entire team responsible for its much-admired supercharger network.
So what is going on with Tesla?
And then we can get in a little bit to this robotics pivot.
But what's the state?
I know you've talked a lot about it.
My big picture theory of Tesla is that it had first mover advantage, and for a long time its EVs were three years ahead of everyone else, and they're not anymore. Now they're basically zero years ahead of everyone else. Or, you know, maybe like a tiny bit, depending on what you're looking for.
And if you look at the stock market valuation, you know, it is trading at 50 times forward earnings, compared to standard car companies that trade at like four or five times forward earnings, and good ones like, you know, Toyota.
So something doesn't compute, something doesn't add up. The idea behind that massive multiple that it trades on is that it has some kind of unique competitive advantage over the
rest of the car industry. And if you look around who's making the best EVs and the best
value EVs out there, you know, it's BYD. It's not Tesla. And we're talking about global
companies here. Tesla has a nice little advantage in the United States because the United States
government is doing everything it can to avoid Chinese EVs being sold here. So it gets to avoid
that competition in the U.S. But that's not the case in the rest of the world. And the rest of the world
Also, you know, on our show, we've talked about Tesla not infrequently as a meme stock.
And while it's not GameStop, there is a lot of, I think, the value of the stock is heavily wrapped up in Elon as a personality and a brand.
And so some of this, I think, is at least Ross Gerber, who's a big Tesla shareholder, argues that some of the fall in the stock price is really about Elon sort of being a chaos monkey within his own company.
Right.
And Elon, like, he can't stop founding new companies, right?
He's just, he's got xAI now, he's got Neuralink, he's got the Boring Company, he's got Twitter.
I'm sure there's a few I'm forgetting.
SpaceX.
Oh, SpaceX, of course.
Yeah.
And, and like, you know, he's trying to do all of these things at once while tweeting maniacally through the whole thing.
And so at some point you have to ask, when does Elon stop being the reason why Tesla's multiple is 10x everyone else's, and start being actually a weight on the stock, such that if he left, the stock price would go up rather than down?
I wonder if he's just, so on our show, I guess last week,
we talked about the supercharger situation, you know, layoffs
and cutting out this part of Tesla's business that is widely admired
and believed to someday be profitable, and why does this make sense?
And I tried to argue the, I think one of our readers called it the 4D chess, you know,
argument that like it seems so irrational.
There has to be some reason that Elon Musk did this, that like he can't be this like
unhinged and wild.
And so I kind of thought that, even though I'm not like exactly like an Elon Stan or anything.
And someone wrote in and was like, no, this was just really unhinged and wild, and no one wants him to do this, his own company didn't want this to happen. It's just possible the man is out of control.
He has a long history of sporadic and impulsive behavior too. And I think part of his lore is that you can be a certain kind of charismatic entrepreneur, and there's a class of people who admires you for that kind of chaos, or the sort of very confident, you know, impulsive decision-making where it's always framed as, you know, I went with my gut.
And Elon sort of embodies that and some people admire it.
I personally think it's a sign of, you know, a CEO who's not terribly stable and I wouldn't like it if I were an investor.
But I understand the appeal to certain people.
But at a certain point, it's like the wheels have come off and the stuff you used to do isn't working anymore.
Like, you know, you used to never do your homework and get great math grades.
And then at some point, your math grades start going down and you have to put in the work.
Exactly, it's a problem. But isn't that selling him a tiny bit short? Now, I agree with a lot of this, but also, like, he has been able to build Tesla, and SpaceX is doing well. I mean, X is, I think, a disaster. But, like, SpaceX.
So I think this is a super interesting question, which is that the more that what you're doing is solving an engineering problem, the better he tends to do. SpaceX has
two big advantages. One is that he kind of doesn't touch it very much. He doesn't spend much
time on it. He has a woman named Gwynne Shotwell who runs it, who by all accounts is excellent
and he kind of trusts her to do the right thing. And it runs itself. But also, it's solving
engineering problems. It's how do we get really heavy things up into space? And he's like,
I can solve that problem. In the early days of Tesla, what he had was an engineering problem.
How do I build an electric car?
Electric cars were something that didn't really exist.
He wanted to build an electric car that was more powerful and better and just as affordable as an ICE car.
And everyone said it couldn't be done and he did it.
And that was an engineering problem and that was his great contribution to the world, right?
He showed that it could be done.
But then having shown that it could be done, other people, especially in China, realized that they could
do it too. And now they are doing it too. And they're doing it frankly just as well, if not better
than he is. If you go further away from engineering problems into, say, you know, take the Boring Company, he thinks it's an engineering problem. Like, how do you build a tunnel? In fact, it's a, you know,
zoning problem and a transit problem and a trying to deal with local government problem. And he's
terrible at that. And it's going nowhere and it's a disaster. If you buy Twitter, there's no engineering
there at all. It's all about like working with humans and networks and moderation and all of this kind of
stuff. And he has no idea how to do that. So I think that, you know, there are things he's good at,
but the kind of things that Tesla needs to do in order to be successful going forward are not
really engineering problems. The world is not sitting here going, you know, EVs need to be
technologically much more advanced in order to be successful. No one
is desperately holding their breath, waiting for, you know, full self-driving cars and
autonomy to arrive. Like, if it comes, it comes. But for the time being, if you want to
compete on EVs, you've got to compete, frankly, on cost. And it's very hard to compete
with the Chinese on cost. In fact, it's impossible. I agree with the top line thesis that
Elon's success with these companies is correlated to whether or not it's an engineering problem,
but I believe it for exactly the opposite reason that Felix does.
I don't think Elon's really an engineer.
And where he can...
Did I say he was an engineer?
Yeah, I didn't say that.
You did imply that he knows how to solve these engineering problems, and I don't think
that's what's happening.
I think where you see him being successful is at the very early stage of a company
when his two biggest skills are writing the check for a capital-intensive business nobody else wants to put money into, and then managing shareholder expectations.
And then the more mature these companies get, and the less he's mediated by PR people and lawyers, the more people sort of begin to understand that he's not the best manager.
His engineering capabilities are barely existent.
He's not an engineer by, you know, education or trade.
No, he's a CEO.
And so the question is, is he a good CEO?
He's not a good product person necessarily.
And the things you expect a CEO to do are manage well,
manage shareholder expectations, communicate well externally, and that's where he's shooting himself
in the foot constantly.
Right.
And I think the more he gets involved on that product side, the weirder it gets, like, as we saw with the Cybertruck, which is clearly a creature of, you know, Elon Musk, product manager, or all of the crazy back-and-forth insanity around Twitter Blue and who gets check marks and who doesn't.
And those kinds of product decisions, when he makes them, tend to work out badly. That said, you know, the Model S when it came out was, as a product, genuinely revolutionary
and amazing and everyone's mind was blown. The, you know, the amazing videos of SpaceX rockets
like landing vertically and staying upright after going to space. You're like, okay, that's a really
legitimately impressive product. Did Elon Musk personally design them? No, but he was, you know,
he has enough engineering now to at least kind of understand what's going on there.
So can we then think about this robotics thing as the next in the line of engineering problems
that he's tackled and tried to solve? And is that basically what's happening with this
pivot in terms of like his framing of Tesla as a robotics company?
I don't understand. How is the car going to be robots, like Transformers?
No, they're actually
building a robot. They have a humanoid robot called Optimus that they say they're going to release
Robotic automation in auto companies, you know, that makes sense for Tesla.
But if you're talking about robots for general use, I don't understand it at all.
Yeah, what's this robot supposed to do, Alex?
I don't fully know.
I mean, it is supposed to be, I guess it's a humanoid robot.
You would imagine you could sort of put it into action the same way you would, like an LLM, except in the real world.
So something that's assistive, something I imagine.
can do work.
But we've had, like, Boston Dynamics doing these robot demos for a while. But they're not exactly, like, mass-produced, outside of, like, sometimes the NYPD will buy one and there'll be, like, this whole blow-up around it.
There's a creepy robot in my supermarket that like follows you around and stuff.
Yeah.
Yeah, no, I've definitely had a couple of like cute little robots in hotels which will like
deliver your room service to you.
But the other thing that we have to mention about this sort of extended Elon universe is that he can kind of put whatever he likes wherever he likes.
This robot that he's talking about is maybe part of Tesla right now, but maybe it could suddenly turn out to be part of XAI if he woke up one morning and decided to change his mind.
You know, part of starting up XAI is actually him basically threatening the board of Tesla and saying, like, unless you give me another $100 billion worth of pay, I'm just going to do all of my sexy AI stuff somewhere else.
He said that quite explicitly.
You know, he famously brought a bunch of Tesla engineers over to Twitter after he bought it because he didn't trust any of the Twitter engineers.
So as an investor in any of Elon's companies, you kind of don't know what you're investing in because all of that money could just wind up benefiting a completely different company altogether.
Yeah.
So this is from interestingengineering.com.
They say the robot is designed to be a general purpose machine that can help humans in various domains such as manufacturing, construction, health care, and entertainment.
So that is the new Tesla.
You know, if you want to revolutionize the American economy, a robot that can build houses would be amazing, because the cost of building a house, the labor cost of building a house, is not only extremely high, but there just isn't enough labor to go around.
There's a massive labor shortage of people who are skilled enough to build a house.
And if we could get a bunch of robots to do that, that would be amazing for making housing more affordable.
Yeah, I'm watching a video of it now, and this robot is, like, taking things off an assembly line and stacking it in, like, special compartments in some containers.
So, who knows?
There already are, you know, a lot of robotics used in manufacturing.
It's not like there'd be a...
Yeah, but that's on, like, assembly lines.
And the idea is that if you put a sort of an AI chip into it, then it can work in, like, real world or, you know, situations like a building site.
Okay. As we're coming towards a close, I just want to talk about this thing that I've, like, had in my prep doc with Ranjan for, like, months and haven't gotten around to it.
But I think this is the right crowd to talk about it with. And that is sort of when it's time to get off the hedonic treadmill and retire and whether retirement is still going to be a thing.
So I'll just set it up. There was this Reddit post where this person posted and they said, after the first two to three million, a paid-off home, and a good car,
There's no difference in quality of life between you and Jeff Bezos.
Basically, like, the sooner that you figure this out, the happier you're going to be.
And time is the currency of life, not money.
And this Austin Rief, who's the founder of Morning Brew, he posted this, and he, like, summarized their responses.
And he said, it's funny how everyone I know who has $2 to $3 million thinks the magic number is $10 million.
And everyone I know who has $10 million thinks the magic number is $25 million.
And everyone I know who has $25 million thinks the magic number is $100 million.
Is he just saying, I know a lot of rich people?
I think that's kind of a humble brag,
but it is, I guess, like, let me turn it over
to the slate money crew on this one.
What do you think about this?
And do you, I mean, I guess like we're...
So the first thing we need to ask is, like, you know,
let's be clear about defining our terms.
What we're defining here is,
how much money can you be happy living on in the absence of any income?
How much money do you need to have in order to retire comfortably and have basically the same standard of living as Jeff Bezos to within, you know, five percentage points?
As Jeff Bezos.
Okay.
So, and that's, that's an interesting question.
But the next question that you need to ask is how much money are you making right now in income? Because to your point, Alex,
about the hedonic treadmill, the whole point of the hedonic treadmill is that you are a little
bit unsatisfied with your current income and you want a little bit more income. And that is not a
function of wealth. That's a function of income. Now, a lump sum of cash, an amount of wealth will
generate a certain amount of income. And for our purposes, let's just say 4%. Let's just say that,
you know, a lump sum of cash will generate a certain amount of real income in perpetuity of roughly
4%. So if you have a million dollars, that will give you $40,000 a year in real income in
perpetuity. So if you have amassed your million dollars of wealth by earning $150,000 a year,
and then you retire with a million dollars
and suddenly you have to live on $40,000 a year,
that's a major decrease in your standard of living.
If, however, you just graduated from college
and you inherited a million dollars
and you've never had $40,000 a year to live on
and you suddenly have this $40,000 a year income stream,
then it's an increase in your standard of living
and you can probably do that.
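Felix's arithmetic here can be sketched as a quick calculation. This is a minimal illustration, assuming the 4% perpetual real-return figure he uses in the conversation; the function names are mine, not from the episode:

```python
# Sketch of the "4% rule" arithmetic: a lump sum is assumed to throw off
# roughly 4% per year in real income, in perpetuity. Equivalently, the
# wealth you need is about 25x your target annual income.

REAL_RETURN = 0.04  # the assumed perpetual real return from the conversation

def annual_income(lump_sum: float, rate: float = REAL_RETURN) -> float:
    """Real income per year generated by a lump sum at the assumed rate."""
    return lump_sum * rate

def wealth_needed(target_income: float, rate: float = REAL_RETURN) -> float:
    """Lump sum required to sustain a target income (the 'divide by 25' test)."""
    return target_income / rate

# The $1 million example from the conversation: about $40,000 a year.
print(annual_income(1_000_000))   # 40000.0

# Someone used to living on $150,000 a year would need 25x that saved.
print(wealth_needed(150_000))     # 3750000.0
```

So by this test, the person who amassed $1 million while earning $150,000 a year is still far short of retiring at their accustomed standard of living, which is the point being made here.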
So I think there's two variables here, right?
It's not just the question of how much money
is enough. It's also how much income are you used to? And if you can reach that point where the amount
of money you have divided by 25 is equal to your current income, then I think you're happy
and you can retire. And let me just talk about like this retirement question overall because more
and more we see that our social systems are overburdened. And I think a big political issue over the
next couple of years is going to be whether these things like social security continue to kick in
at the ages they do. And here's just one quote from Ben Shapiro. He said, no one in the United States
should be retiring at 65 years old. Frankly, I think retirement itself is a stupid idea unless you
have some sort of health problem. That is one man who should definitely retire. He has enough
money to retire. For the good of society. Yeah. Well, first, do you know, 30% of people retire between
the ages of 62 and 64, and then a bunch of people retire at 65. So people retire, I think,
earlier than they think they're going to retire. I don't really know what Ben Shapiro is talking about.
For many people, the decision of when to retire isn't really theirs; they don't have as much agency in making that decision, I think, as someone like a Ben Shapiro is imagining. You know, you get laid off
from your last job because you're too expensive. Your company would rather hire someone 30 years
younger than you, so that's what happens, and all of a sudden you're out of work. And, you know, you're 61 years old and no one wants to hire a 61-year-old anymore, so you're consulting and you're basically retired. Or you get hurt on the job. There are so many people, you know, without college degrees that are doing some kind of physical labor, and their bodies can't make it to 65 or 67.
Yeah, I think Ben also is just sort of incapable of imagining the lives of people who are not
white-collar elites, you know, when you look at people who retire earlier than 65, you know,
a lot of people don't even have retirement plans and they end up doing it just because
the work is exhausting, you know, if you're doing a job where you have to do hard labor,
or even, you know, positions where you're on your feet all day in retail for five, maybe six
days a week. And I think some of this, when Ben says he doesn't think people should retire,
I think he's reflecting a sentiment that's a little bit political, which is that, you know, work is inherently good and everyone should strive to work. And the reality is a lot of people work in really crappy jobs that make them miserable. So it's, you sort of have to ask yourself who benefits from that.
And then in terms of how much you need, I think, and Felix has written about this, like, no one really knows how much they need in retirement. It's a real big mystery. You get to that end date and you have a lump sum of money, but then you don't know one of the parts of the equation, which is how long you're going to live. It's kind of a mystery, and you hope for the best, but you also need the money to last until that best number is reached. And I think there's a lot of anxiety there in terms of making that decision, like, oh, I'm going to stop bringing in money and hope what I have lasts for the next 20 or 30 years or something.
And, to be clear, also, like, the people with $25 million who think they need $100 million, like, at that point...
No, but just to be clear, those people do not think they need $100 million because they are worried about burning through their $25 million. Those people think they need $100 million because at that point you start becoming more ambitious in terms of how much money you want to have when you die. You want to leave money to your family and your kids, you want to leave money to charity, you want a certain amount of wealth, you want a certain amount of legacy. But, like, once you have $25 million, there is almost a zero chance you're just going to spend it all, unless you're Sam Altman.
Yeah, on semiconductors.
Do we think the government, which has borrowed against Social Security... are we about to see, like, a war on retirement as they try to figure out a way to raise the retirement age?
I think if they can avoid it that won't happen because first of all social security enjoys
enormous bipartisan support. And there are people, Republicans specifically, who would rather
that not be the case, because it makes it hard to kill entitlements generally. But given that
their base is one of the most rapidly aging segments of the population, it's going to be very
difficult to get anything passed politically that would actually, you know, put a dent in social
security as a program. I think, though, raising the retirement age, that's something I could see happening. It's happened in other countries. People hate it, and they basically protest and riot.
It's happened in this country.
Yeah. And it does, I mean, it makes some sense. People do live longer. But unfortunately, poorer people and low-income people don't really live that much longer.
So I'm not sure about it as a policy overall.
Okay. Well, we've talked about AI, Tesla, and the hedonic treadmill. I'd say it's a pretty diverse but super fun conversation. So thank you to the Slate Money crew, the co-hosts, Felix, Emily, and Elizabeth. Great getting a chance to speak with you about this stuff. And I really can't wait to hang out in your neck of the woods sometime.
Thanks for having us.
It's been fun.
Thanks, Alex.
Thanks, Alex.
Thanks again.
Thanks, everybody.
We'll be back on Friday with Ranjan Roy to break down the week's news.
Until then, we'll see you next time on Big Technology Podcast.