Investing Billions - E175: Elon Musk: 10 Billion Humanoid Robots by 2040? w/NEA Partner Aaron Jacobson
Episode Date: June 16, 2025

Aaron Jacobson is one of the most insightful thinkers in AI, robotics, and cybersecurity—and in this conversation, he breaks down what’s real versus what’s hype. We go deep on the future of humanoids, the growing threat landscape in cybersecurity, and why the biggest companies of the next decade will be built around infrastructure—not just models. Aaron is a partner at NEA, where he’s led investments in transformative companies like Databricks, Cohesity, and Uniphore. He brings two decades of experience in frontier tech and a rare ability to see where disruption is actually happening—and where it’s just noise. If you’re trying to understand the current moment in AI, or where the biggest investment opportunities will emerge over the next 10 years, this episode is essential.
Transcript
Elon Musk has predicted that by 2040, there's going to be over 10 billion,
with a B, humanoids on planet Earth.
You've been studying this going back to your undergrad career over two decades ago.
What do you think about these predictions?
Huge fan of all the companies that he's built as well as him
being a technologist and futurist.
But I really view, uh, Elon's predictions like all of them.
They're super inspirational, but they're optimistic.
I think he's been promising self-driving cars now
for about 10 years.
And they're here, but in a very narrow fashion.
And they're nowhere near as massively distributed
as like these bold predictions typically imply.
And I think what's interesting about this prediction
is he's learned from the past
and he's put a much longer time horizon on the prediction now.
But I still think he's off by multiple orders of magnitude because it's just going to take
much longer than people expect for us to get enough data and also for us to have breakthroughs
in the model architectures behind foundational robotic models to allow for general purpose
humanoids and even beyond the AI technology.
We're also going to have to think about how we scale up the supply chain because to get to that
many robots, we're going to need massive investments in motors and all the various components that you
would actually need to even build 10 billion humanoids.
What is the most difficult part about building a general purpose humanoid today?
There's two problems, um, that we're running into.
Um, the first is just the robustness of, uh, humanoids. Robots are historically very good at very narrow, very specific tasks, but as soon as you adjust one
small thing in the task, right, you might train a robot to be very good at folding shirts,
but you give it some jeans and it fails.
Or maybe you put it in a room that has slightly lower light
and it fails on actually folding that shirt.
And so the robustness and the reliability of robotics
based off of today's foundational models aren't there.
And look, I think there's really three fundamental challenges
that we're gonna need to overcome if we're going to beat that.
The first is really the scaling law.
I mean, language models, they improved so fast just because there was a pretty
quick understanding of the amount of data and compute required for us to actually get
exponential improvement in the performance of these language models.
But we're still quite early in understanding the relationship in the robotics world just
because of how much more complex navigating the physical world is relative to language. I mean just think about the human brain
and how many years it took for us to evolve language relative to like spatial awareness
and walking and moving and navigating a 3D space. LLMs basically assume that, look, more quantity is
better, and, you know, eventually once we got to 15 trillion tokens, which is the internet and then some,
we started to see really magical results. But with robotics, we still don't really have confidence
in the amount of data that matters
and when we're actually gonna start to see
generalizable behavior like we see with LLMs emerge.
And I think there's other aspects about data with robotics,
in that quantity is not gonna be the only thing
that matters; quality is gonna be important too,
as well as the diversity of data.
Once you go to the real world, think about the amount of diversity, variance,
and the combinations of what a robot will need to figure out to actually be able to solve a problem.
People out there may be holding some type of robotic hand or maybe tele-operating robots
in enough different situations that are diverse enough, seeing a variety of environments,
different objects.
How are we going to do that in a way that's cost-effective
so we can get enough data?
And then how much data is that going to be
in terms of running it through the GPUs
and the underlying compute costs
to actually train a model that's reliable?
And I think another part is the underlying model architecture.
I mean, transformers at the end of the day,
they're not that efficient.
They're good enough for LLMs.
We're able to get enough data and compute
at a reasonable enough cost to have magic be created
through OpenAI and Anthropic and Llama.
But we don't have that magic yet in robotics
because it's still anyone's guess in terms of the order of magnitude
of the data and compute we need relative to the existing transformer architecture. If we found
an architecture that was, you know, a hundred times, a thousand times more efficient, then I
think that would really go a long way because it would start to function like the human brain.
I mean, I have twins that are now almost three and a half. I've been watching their evolution.
I've been watching their brains,
their LLMs work in real time.
And the things that they learn
will just amaze me. I'll say,
how did you learn how to do that?
How did you even pick that up?
Like, who told you that?
Like, you're asking questions.
You don't even, like, how have you even seen enough data
to be asking questions like this?
Because the human brain is really efficient
at building a world model,
thinking about how the world works.
And I think that's an open question, whether Transformers actually has a world model or
whether it's regurgitating what it's seen before, obviously with some adjustment in
inference in terms of thinking differently based off of the patterns that it's been
trained on.
Steel-man your thesis a little bit.
What would make you change your mind about your timeline for humanoids?
And what would make you think that the timeline is being accelerated?
I would want to see strong evidence and progress on that scaling law where we
actually see an order of magnitude improvement on a robot's capabilities
across multiple platforms in a variety of different environments, uh, as well
as tasks, you know, maybe it's picking, packing, folding laundry, loading the dishwasher, being able to introduce
new objects it has never seen before and having it figure out and do that, uh,
with a significant improvement in terms of accuracy.
Like if you start to see that, um, that would start to make me believe.
So I'm predicting a hundred X or a thousand X evolution in AI
capabilities over just the next two, three years.
Why would that not lead to massive evolution in humanoids and a decrease in costs of development?
Let's talk about the AI space itself.
The jury has been out on whether LLMs are where value will accrue in the ecosystem.
What are your views on this?
An important part of LLMs is whether they're closed models or open models,
closed models being, like, OpenAI, Anthropic, where you have to access that model
through either ChatGPT or maybe an API, open models being, you know, Llama, uh,
out of Meta, uh, DeepSeek and Qwen out of China, where you can actually take
that model, uh, run it yourself. You have a lot more control and the ability to flex and change and customize
that model, and then you either run it yourself or try to get as many dollars
into that company.
Also thinking about risk reward and time to value and all that, but you typically
want to get a lot of money into that company.
And so it starts to make sense to start to have these different ways to win as you build
and help create some of the generational companies that are out there.
How should people follow you on social?
Yeah, you can find me on X, Aaron E.J.
You can also find me on LinkedIn, which I'm posting on and spend a lot of time on.
You can also reach out to me, ajjjacobson@nea.com.
Always love meeting founders and talking
about the future of cyber AI and robotics.
Thank you, Aaron, for taking the time.
Look forward to catching up soon.
Thanks, David.