Moonshots with Peter Diamandis - Robotics CEO: The Humanoid Robot Revolution Is Real & It Starts Now w/ Bernt Bornich & David Blundin | EP #188
Episode Date: August 15, 2025Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends David Blundin is the founder & GP of Link Ventures Bernt Bornich is the Founder and CEO at 1X Robo...tics, a competitor to Figure and Optimus. – My companies: Test what’s going on inside your body at https://qr.diamandis.com/fountainlifepodcast Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod Apply to David's and my new fund: https://qr.diamandis.com/linkventures –- Connect with Bernt: X Linkedin Learn more about 1X Tech: https://www.1x.tech Connect with David X LinkedIn Connect with Peter: X Instagram Listen to MOONSHOTS: Apple YouTube – *Recorded on July 28th, 2025 *Views are my own thoughts; not Financial, Medical, or Legal Advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You think about robots in the world probably more than anybody else.
What's your vision 10 years from now?
First of all, what will happen is...
Everybody, we're here at 1X Technologies in Palo Alto.
Bernt Bornich, the CEO and founder. Neo Gamma 1 and Neo Gamma 2 over here.
I imagine we're going to have the same level of AI eventually in the robot,
where I feel like I'm talking to a fully intelligent being.
And all that is grounded, right?
that actually understands what this existence is.
I'm shocked by that.
I'm shocked by that too.
How do we solve the remaining really hard problems in science?
This is not going to happen without humanoids.
It's almost existential to us for human happiness.
So Salim is constantly saying,
have it look like an octopus and let it operate in all the elegance that an octopus can,
rather than trying to constrain it into five fingers on this hand
that do certain things and manipulate objects the way we're supposed to manipulate them.
So what's a definitive answer to him?
So let's just say humanoid is a thing.
Now that's a moonshot, ladies and gentlemen.
Dave Blundin, my moonshot mate, and Neo Gamma.
Neo Gamma 1 and Neo Gamma 2 over here.
And we just did a tour of the facility, and it's pretty extraordinary.
We saw probably dozens of Neo Gammas in different stages of development.
They literally manufacture everything from head to toe.
and how many components inside Neo Gamma, roughly?
Oh, top secret.
Top secret.
But it's in the hundreds, not the thousands.
Okay.
I'm pretty proud of.
I just secured my first Neo Gamma at my home by the end of the year.
Is that right?
Oh, yeah.
Okay, fantastic.
Great.
So we're about to do a podcast either with Bernt or with Neo Gamma, depending upon what you want.
And let's go ahead.
We'll go back to the podcast area.
Would you lead the way and maybe clear the way for me?
Awesome.
By the way, those bags over there, Neo Gamma can carry those.
Over half its weight, actually.
Yeah.
Now, Neo Gamma, can I give this to you to carry?
You can try. It might hit some safety limits, but it usually works.
All right. Arms up.
Grab it properly.
There you go.
You can let it go and you can take a few steps. There you go.
It might, after a few steps, decide that, like, this is a bit unsafe for me.
Thank you, Neo.
It's incredibly strong. All right. And it's nice to know that Neo Gamma will
clean up the house around you. Yeah. Well, listen, I'm not sure what number you are,
but I want to say thank you so much. Thanks for cleaning up. Of course. Thank you for your time.
A pleasure.
A pleasure, and thank you very much as well.
I want to be polite.
You know, you never know when the robot overlords are going to, like, come after us.
I want you to remember, I was really polite.
I was really, I was really polite.
Okay, I'm safe, great.
Have you ever been in love?
I mean, you meet all these other robots.
I mean, some of them have got to be turning you on.
No?
You should take a look at 41.
41.
Okay, gamma 41 is your gig.
Okay, got it.
Thank you.
Okay, oh, listen, Bernt's here.
Let's stop this conversation.
Sorry, sorry.
Do behave.
Everybody, welcome to Moonshots. I'm here with my Moonshot mate, Dave Blundon.
Salim Ismail is offline with his son this weekend.
But I'm here in particular with the CEO and founder of 1X Technologies,
Bernt Bornich. A pleasure, Bernt.
Awesome.
Looking forward to this one.
Thank you.
Yeah, I mean, we just finished this tour, and it's pretty extraordinary.
When did you move into these facilities here recently?
It's like one and a half months ago?
Nice.
Well, I mean, just many levels.
of people building robots.
No robots building robots yet.
We're getting there, but not yet.
So, I mean, when I, you know,
I'm very familiar with the robotics
in the humanoid robot space.
And while companies like Figure
and Tesla are focused initially on going
into factories, automotive factories
in particular, you made a commitment to the home.
Yes.
And personally, I'm excited about that.
But I'd like to start with why the home.
To me, there's a lot of reasons, but there's two main reasons.
Now, the first one is kind of obvious, which is just, like, consumer hardware just scales at a
different pace than everything else, right? Yeah. Like, we got to more than a billion devices of the
iPhone in, like, a bit more than a decade. And to me, humanoid robots do not make sense unless
it's at scale, right? There's always a better automation system that you can use for one specific
problem. You need scale so that you really get this incredible reliability, incredibly low cost,
incredible ecosystem and intelligence. Now, the slightly deeper one is also that intelligence
comes from diversity. And this has been very clear actually from all the way in the beginning,
really, in all kinds of AI research and also now more practical applications of AI across
all different domains, where it is like a language model or an image model or a video model,
or in this case, a robotics model,
you don't really need data
of the same thing over and over.
If you think about it, it's very logical, right?
So if you're in an automotive factory,
you're basically doing the same thing over and over again.
You're not learning new stuff.
Yeah, and we actually have some data on this.
We have some real data because our previous generation,
humanoid EVE, we deployed that into both guarding and logistics
back in 2022, 2023.
And after about 20 to 40 hours,
our robots kind of plateau and stop learning
for that specific task.
Depends on how complex it is.
Like if you're guarding a facility
and you're kind of driving around
because that had wheels,
but it was a humanoid on wheels,
opening the doors and like,
there's some diversity to that.
So then you're more on like the 40 plus hours.
And if you're just like moving this cup
from here to over here all day, right?
Then like you're in the lower end of 20.
And there's just no path from there
to like general intelligence.
And we are maybe kind of a bit
different than the rest of the humanoid space in this, in that I see us more as a company really
running towards AGI, and how can we get there as fast as possible, versus how can we
apply labor in industrial or similar settings.
So it's robotics in service of building true AGI models and getting enough new rich
data to train up these models.
You said 20 hours, 40 hours for security guard robot.
What's the equivalent for all the variety?
of things you can do in the home.
How many hours of...
We don't know yet.
So, tens of thousands of...
Yeah, so, like, at our current scale,
we don't really see any kind of cap on diversity.
It'll get there and we'll need to diversify.
But I think, like, you ask a very important question, right?
Because we want to talk about, like, what is the goal?
And, like, to me, it's not just AI or robotics.
It's a combination.
Because if you think about what this is, like,
what is abundance, right? It's an abundance of knowledge or intelligence, kind of multiplied
by an abundance of labor or goods and services. You kind of need both. And they follow
hand in hand. And we can talk more about that. But like the constraints we have in society
aren't always only on the intelligence or data layer. They also are on the substrate that
we're building on. Right. Every week, my team and I study the top 10 technology metatrends
that will transform industries over the decade ahead.
Trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more, there's no fluff.
Only the most important stuff that matters, that impacts our lives, our companies, and our careers.
If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email.
And if you want to discover the most important metatrends 10 years before anyone else, this report is for you.
Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs,
building the world's most disruptive tech. It's not for you if you don't want to be informed about
what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com
slash metatrends to gain access to the trends 10 years before anyone else. All right, now back
to this episode. So when I think about it, I imagine this is like a toddler crawling around,
playing, investigating the physical universe, interacting with different people and different
things, learning and building a model in its neocortex.
And so is that basically the same? Your Neo Gamma is an infant learning in a diverse
environment?
It is.
And I think, just like to some extent for humans too, right,
but it's more pronounced in other animals, how much of this kind of intelligence is innate and part
of your instincts.
You don't want your robot to just go around randomly doing anything.
You want it to try to do things that might succeed.
So there is room here for the more kind of called classical AI models
where we're training based on internet data, simulation data, synthetic data,
everything that everyone else is doing that's useful to get you off ground.
But it doesn't fully get you there.
It gets you to something that does something seemingly kind of maybe useful.
and then you can experiment.
You can have the robot really have this interactive learning loop
where it's learning in the real world,
and that can get you... we don't know how far it can get you, right?
We don't know yet.
And this whole topic of data gathering, you know,
it's amazing watching them walk around the building here
and walk around the kitchen.
They're so unintimidating.
You walk right up to it intuitively.
You don't feel like it's ever going to do anything awkward,
hit you or anything like that.
So that's got to be incredibly important.
You're saying it's cozy.
It's cozy.
It's cozy and it's, you know, it doesn't seem to break the glasses or anything.
So, I mean, that's got to be really core to the data gathering mission, right?
Because you have to, like you said, let it experiment.
Otherwise, how is it going to learn?
So along those lines, what design elements did you build into Neo Gamma to make it fit for the home?
Sure.
This actually goes all the way back to the founding of the company a decade ago now.
Really, I've been in the field for a long time.
I've kind of been at this since I was a kid.
You're building robots at age, what?
I was 11 when I decided that I was going to, like, do humanoids.
What was your humanoid robot that you modeled?
Was it Star Wars?
Was it Star Trek?
What was it?
Lost in Space.
Honda ASIMO.
What's that?
Honda.
Honda.
ASIMO.
Yeah, it's a beautiful robot, right?
They started very early.
And you can check out, like, the Hondas
that predate ASIMO.
Like, there's more modern ones,
but, like, the Honda P series
was, like, end of the 90s.
Yeah
And that was walking upstairs
Yeah
Running around stage
Giving someone a ball
Like
It greeted President Obama
I think at one point
That was a bit later
But yes
They
And like
It was so ahead of its time
Right
Yeah
But I built a lot of stuff
Up through the years
But I think
Importantly when I started
The company
I sat down
And I thought really deep
About this like
Okay
There's all these amazing
robots that we worked on. And it didn't really work. Why didn't it work? Right? And then it comes
down to these, like, fundamental principles. First of all, if you actually want to make something that's
scalable with respect to intelligence, it needs to be able to live and learn among us. And there's
just so many nuances to this through everyday life, right? Everything we do is social. Like work is
social. Every task is social. And we navigate these social situations.
all the time while we do the things we do.
And most of the world's labor also happens in a social context
in that there are other people around you when you do it.
Objects have social context, right?
The coffee cup is empty?
Do you need a new one? Is it dirty?
Or do you want a refill, or do you keep your cup out through the day?
And there's this like, the role of diversity that you want to access.
So, kind of if you're a big believer in that, then it boils down to,
okay, the robot needs to be safe.
From a first-principles point of view, not able to harm people.
it still needs to be very capable
and it needs to be as strong as a human
then it just needs to be incredibly affordable
like you need to find this beautiful
combination where you can simplify
simplify simplify and still get a very capable system
so that you can manufacture this at scale
and really drive quality up and cost down right
so that was the founding principle of the company
a decade ago. We said, like, we're going to make robots
that are safe, capable, and affordable
and by affordable I mean
it's going to be like first principles
manufacturable and affordable
very lightweight, very energy efficient so you can have a small battery, very few parts,
designed in a manner that doesn't require tight tolerances, no special alloys or materials,
and just like incredibly simple but performant.
That's really what we've always set out to do.
And that's also why it took a decade, right?
Because like there's so much novel research that's been done in the company to get to
where we have these tendon-driven robots.
What's the vision there relative to the car, say?
Like one in every household, two.
You mentioned the iPhone, you know, the direct-to-consumer iPhone sales getting to, you know, a billion.
But it's exactly one per person is pretty obvious, right?
But robots could be two, could be four, could be more.
I've done that poll, and everybody routinely says I would have at least two, depending on the price point, right?
So price point-wise, you know, when I think about this, what I've heard is, you know, 30K, 20K. We've seen Chinese robots at
much cheaper price points, but not as capable as Neo Gamma. Do you have a price point that
you're thinking about? You're not far off. It's cheaper than what people think. It's quite
interesting because I think this is very important. I want to make sure that we are not only making
the best product. We want to be price competitive. I think that's going to be incredibly important.
and we are actually still price competitive with the Chinese ones,
but you have to count, like you said, it's not the same, right?
So if you think about the number of degrees of freedom that the robot has,
like how much capability, basically, how many joints,
then we actually have a significantly lower cost.
So I think we've done a really good job on reducing complexity to get there.
The numbers that I keep in my mind are like 30K purchase, or 300
bucks a month to lease, 10 bucks a day, 40 cents an hour. Am I in the right range there?
Yeah, I think we could do better, but yes. Okay, that's fantastic. But I mean, do you need to do
better? No, I mean, in a heartbeat, in a heartbeat, that's good enough. Yeah. But in that case,
I think people could imagine owning a couple of those robots. So I think it really depends on
the lens you see this through. So I think clearly everyone's going to want a robot. And I think
there's this beautiful thing about the companion aspect of this,
which is so underrated, right?
Because the humanoid is just such a beautiful interface for AI.
And when you talk to it and you're like, see the body language,
it can look at you, it sees who's talking to it,
direction of all these things.
Like, all my 11-year-old daughter wants to do,
when she has the robot, is just sit next to it on the couch
and talk about things, right?
And that is clearly going to be such a big aspect of it.
And I see it like, it's not, I'd say,
it's not another pet, but it's not another human either.
It's something kind of in between.
And like I said, it's kind of like my Hobbes.
Like, if you've ever read Calvin and Hobbes, it's the Hobbes.
And I think it's going to be incredibly exciting to see how these relationships develop.
Because it's the thing that will be like around you all your life, right?
It will remember everything about you.
And it's going to like...
I have to say, if you compare C-3PO and that vision of an assistant robot
to what you've actually built,
two things jump out at me right away.
One, it's soft.
It's not like a metal outside.
And two, the voice is perfect.
Oh, thank you.
Like when you're speaking to it, you immediately are disarmed and you just talk to it
because it doesn't have a C-3PO robotic voice.
It has just a perfectly soothing normal voice.
And it's very responsive to anything you say, any gesture or anything.
So I imagine that these robots will all have advanced AIs at
the level of, you know, a GPT-5 or, you know, a Gemini 3. And in so being, those robots will
be hyper-intelligent and able to understand fully and answer what you need. And once they've
learned the physics models fully, do whatever you need. You've made a decision to build your
AI systems in-house. And I find that fascinating. And in fact, a number of the other
humanoid robot companies (I'm not going to put you into a
comparison mode here) have made that same
decision, versus
partnering with the large hyperscalers.
Can you speak to that?
Well, we're not doing the same thing.
I mean, to me, intelligence does not
begin with language.
Language is
a generative, artificial construct
that we have come up with. And it's
incredible. I mean, it's such an
efficient, compressed way of conveying
meaning and instruction.
So, language is very useful, but it's not the core of your intelligence.
The core of your intelligence is spatial and temporal, and it has to do with how you perceive
the world around you, both with respect to how you see the world, but also how you feel
the world, right?
And we're getting to where we're seeing that, like, models that are native to that modality
and then you add text will be more intelligent and more powerful than the language first.
I mean, I've read about intelligence and the belief is that you needed embodiment for intelligence to exist and language for intelligence to scale.
I don't feel that I can prove.
I don't have rigorous proof that embodiment is needed.
I do have very, very strong proof that, from an engineering perspective, it's just a way shorter path.
So if you think about, like, the information in
the world, and can you access this? You could train a world model that can predict video and
tell you like, hey, here's a new video frame, right? Render this for you. You could, in theory,
you could probably train that only on text. Like, if you have enough text descriptions of
things, maybe at some point you could like get a high enough signal noise that you actually can
get something useful out. At least if you kind of like have some feedback loop with like some
RLHF or something where you're like, am I happy with this frame?
But I mean, why would you do that?
That's just like such an inefficient way of doing that.
Of course, you train on video because you're going to like output video, right?
So from that perspective, I think it's just obvious that like you need all the modalities that we experience if you want to get to first and foremost, like human level intelligence and hopefully pass that again.
But then I think there's one other thing about robot.
There's two other things actually that are quite important when it comes to learning.
And the first one is quite obvious, and I think we all kind of identify this, which is like robots can do interactive learning, right?
So you interact with the world and therefore you can learn.
But if you think about it more from a kind of academic point of view, of how this intelligence kind of evolves, how you get reasoning, all these things, then what we generally do is that we have some observation of the world.
We kind of know how the world works.
So I know that if I do this, I know what is going to happen, right?
I've seen this before.
So I actually start with that.
And now I have a goal.
I want to pick up the cup.
So now I have a model of the world.
I have a goal of picking up the cup.
I take an action.
I know which action I took.
I know the action I took was to, like, reach for and grasp the cup.
And then I observe the result.
If you look at the internet, or in general, you can look at YouTube, right?
All you have is just the observations.
You don't have any of the mental model of like the person in that video.
You don't know which actions they took.
You don't know what they tried to achieve.
You only have the observation.
This is not how we learn.
You can actually bring it all the way back to the scientific method.
It's like you should have a theory.
Come up with a hypothesis.
You test your hypothesis.
You observe the result.
And then you do it again and you learn.
And that is just not possible with the internet data.
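Bernt's model-goal-action-observation loop can be sketched in a few lines of toy code. This is entirely illustrative (not 1X's training code); the cup position, the belief variable, and the update rule are all invented for the example:

```python
def interactive_learning(true_cup=5, steps=50):
    """Toy agent: find the cup by acting, observing, and revising its model."""
    belief = 0                                   # model of the world: where I think the cup is
    state = 0                                    # where my hand actually is
    for _ in range(steps):
        action = 1 if belief >= state else -1    # hypothesis: move toward the believed cup
        state += action                          # test the hypothesis in the world
        if state == true_cup:                    # observe the result...
            belief = state                       # ...and ground the model in reality
            return belief, state
        if state >= belief:                      # cup wasn't where I expected:
            belief += 1                          # revise the hypothesis and try again
    return belief, state

# A scraped video, by contrast, is only the sequence of states:
# no belief, no goal, no chosen action -- this loop cannot run on it.
print(interactive_learning())
```

The point is the column structure of the data: interaction yields (model, goal, action, observation) tuples, while internet video yields observations alone.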
So there's...
Definitely impossible with the next token, you know,
internet scrape and with all the video scrape.
So then in these limited domains like coding and physics experiments,
you can actually have that same experience,
but it's only within that domain.
Like coding is a good example.
Like, oh, let me try writing it this way.
It didn't work.
Let me try writing it that way.
It didn't work.
So you get very, very good at that narrow domain.
Still have no intuition about how the world works.
No.
You can do simulation, no?
So again, it's hard to prove that this won't work, right?
So, sure, if you have a really good simulator and you,
really scale simulation on learning and simulation
with agents. Maybe you can get something
similar. But
I mean, the fidelity of a simulator is nowhere
near the real world. And it's just
like so incredibly hard to get there
and close that gap. And it's also
so compute inefficient compared to just being
in the real world. That I think for me
it boils down to not this academic
exercise of like proving who's right and wrong.
It's more, what's the engineering
approach that makes sense here? And it's just
a way shorter path.
You mentioned before in our
conversation, the amount of data that's being collected relative to Google or YouTube or Tesla.
Can you speak to that?
I mean, your mission is get as much possible data during the day of an interaction of these robots in the home.
Yes, I mean, you can do some napkin math, right?
And of course, we don't know exactly, like, what is the most useful data for which model architectures, etc.
But if you think about it, if you have 10,000 robots out there and they gather data most of the day,
then that is more data than the like non-duplicated useful data that gets uploaded to YouTube each day.
So already at that scale, you actually have like your fleet of robots generating more useful data than YouTube.
So that's just a 10,000.
And then if you think about like how we scale manufacturing here as this starts deploying into society,
you actually very quickly come to the conclusion that like, you know what?
The internet isn't actually that big.
Like you're going to have way more data from robots.
that you're going to have from the internet.
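The napkin math here can be written out explicitly. Note that the active-hours figure, the YouTube upload rate (~500 hours per minute is an often-cited public number), and the "useful" fraction are all assumptions plugged in for illustration, not numbers from 1X:

```python
# Napkin math behind the claim: fleet data-hours per day vs. the
# "non-duplicated useful" slice of daily YouTube uploads.
ROBOTS = 10_000                    # fleet size from the conversation
HOURS_PER_ROBOT = 16               # assumed active hours/day (micro-breaks excluded)
YT_HOURS_PER_DAY = 500 * 60 * 24   # ~500 h uploaded per minute (often-cited figure)
USEFUL_FRACTION = 0.2              # assumed non-duplicate, useful share

robot_hours = ROBOTS * HOURS_PER_ROBOT
useful_yt = YT_HOURS_PER_DAY * USEFUL_FRACTION

print(f"fleet: {robot_hours:,} h/day vs useful YouTube: {useful_yt:,.0f} h/day")
# With these assumptions the 10,000-robot fleet already exceeds the useful
# slice -- and its data carries actions and goals, not just observations.
```

Change any assumption and the crossover point moves, but the shape of the argument is the same: fleet data scales linearly with robot count, while useful upload volume is roughly fixed.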
So I want to hit some numbers here just to set them as foundations.
You built hundreds of the Neo Gamma, roughly,
but you've got a new manufacturing plant that you're about to open.
Can you give us a sense of, and then another one that's in plans, right,
without disclosing anything you're not willing to,
but can you give me a sense of by the end of 26,
how many you're manufacturing on an annual,
run rate. And then in 27, 28, what's the growth path you imagine? Yeah. First of all, just
small correction. We haven't built more than 100 of the Gammas. But we've built more than
100 of the robots. Right. There's been multiple versions. But the factory run rate,
end of 2026, is north of 20K. 20,000 annually? Annually, yep. Of course, there's a ramp
to get there. So you don't reach quite that number in 2026. A couple thousand a month.
now the factory after that is kind of like we're trying to follow an order of magnitude right
we're not going to quite be able to do that i think the iPhone ramp is a very good comparison here
where you see like they almost double but like you have a few plateaus as you reach certain scales
where you run into problems and there are some quite interesting problems that if you're
going to scale the manufacturing of humanoids to the iPhone level right because you run out of
some basic stuff, like aluminum, for example.
You won't use all the aluminum on the planet.
That's not what I mean.
But there's a certain percentage of current aluminum refinement
you can use before you start to really struggle sourcing aluminum.
And that might be a challenge.
The iPhone ramp was about doubling.
That's an interesting stat I hadn't even thought of.
I mean, you get to a billion.
It's more like 1.7.
1.7x annualized? Wow, that's not as fast as I thought.
So you can imagine 100,000.
Exponentials go quite far.
Yeah, no, I know.
We've heard it.
So, but you can imagine a run rate before the end of this decade of hundreds of thousands
per year.
End of this decade.
Way more.
Way, way more.
Yeah.
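As a rough back-of-envelope of the ramp being described, compounding the ~1.7x annualized iPhone-style growth rate from the stated ~20K/year run rate at the end of 2026 (purely illustrative, not 1X guidance):

```python
# Compound a ~1.7x annual growth rate (the iPhone figure mentioned)
# from the stated ~20K/year factory run rate at the end of 2026.
rate = 1.7
units = 20_000.0
for year in range(2026, 2031):
    print(year, f"{units:,.0f} units/year")
    units *= rate
# At this rate the run rate reaches ~98K/year by 2029 and ~167K/year
# by 2030 -- "hundreds of thousands" territory before the end of the decade.
```

Bernt's "way, way more" implies he expects to beat the iPhone's rate, so treat this as a floor on the trajectory he's sketching, not a forecast.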
Now, at that point, you need to really think about, like, what are the things that
will slow you down, right?
And it comes down to refining, like mining and refinement, of course.
But increasingly, it actually comes down to labor.
you're not going to get there
without really using robots for labor
if you think about the iPhone ramp
then Apple kind of displaced
large part of
like the Chinese population
across the country
for labor and they still
ran out of labor and had to expand
into neighboring countries
now I think we've done an incredible job
in the design so it's very few parts
it's very simple to assemble
but it's still more
It's still more complicated to assemble than an iPhone.
Yeah, I was going to say, it looks more complicated.
Yeah, it is.
It is more complicated than iPhone, right?
So let's say it takes five times as long, so we need five times as much labor as the iPhone.
Okay.
Then you're in trouble.
Then you're in trouble.
So it's got to be solved.
So, like, you have to automate, right?
Yeah.
And, of course, that's the goal anyway.
Like, we want to get as quickly as possible to what I call like this hard takeoff moment
where you have robots building robots, building out the data centers, the chip fabs, the energy infrastructure.
And what can we learn from the car, actually?
So you've got the iPhone, fewer parts, one-fifth the labor per unit.
Then over here you have a car.
How does the part count compare to a car?
So we have a few hundred parts.
A car has 50,000 roughly.
50,000.
So it's much simpler to get scale.
And I mean, a car weighs 4,000 pounds.
Yeah, a lot of material.
Our robot weighs 66.
Okay.
So I think, like, it's not really comparable to a car.
I've seen a lot of people in the space compare humanoids to cars,
but I think then you should go back to the drawing board, to be honest.
Like, it's not a car.
If you do a really good job, it's closer to a refrigerator.
It's a very complicated refrigerator, but it's closer to a refrigerator than cars.
Let's dive into a little bit of the, let's shape the understanding of the robot for our viewers and listeners.
66 pounds.
Let's talk about battery life, its abilities, you know, describe it from a
specific stats point of view, if you would.
Oh, yeah, sure.
So I think, first of all, I think the most important stat is that it's huggable.
It's huggable.
Yes, it is huggable.
I have hugged a robot, yes.
And, like, this is just about safety, and how it feels to be safe in its space. Soft.
But from a pure stats point of view, it's 66 pounds.
It can lift about 150 pounds.
Which is amazing.
I mean, in terms of the weight strength ratio is huge.
It is the weight to strength ratio of an athletic human.
Yeah.
And then it can carry like about 50 pounds around.
That's what you hopefully saw earlier here.
And battery life is about four hours.
Rechargeable in half an hour?
Half an hour?
Half of it.
So like two hours if you use the full battery.
Now, actually, interestingly enough, I have one in my house, right?
So I'm starting to get some data on this now.
It's five foot four, five foot five?
What is it?
Five foot four.
Five foot four.
Okay.
I think so.
That's a perfect height, by the way, just in case you're wondering.
Yeah, it's also the height of my wife.
So, it's like, I agree with you.
It's mine, so that's good.
So it's, but I mean, it's, what's very interesting.
Like, once you start actually using the product, right?
You notice a lot of things that you don't usually show up on a spec sheet.
Like, the robot is completely quiet.
And that's not a coincidence.
That's something we worked so hard on.
And the first time you put this in your home, you think, like, the robot's very quiet, it's fine.
You put it in your home and, like, the first day it's fine. Second day it's a bit annoying. Like,
third day you're like, oh man, is it going to leave my living room soon? Because, like, this sound, right?
It's such a requirement that it's, like, just dead quiet, right? You're going to have this in your space.
Charging-wise, you don't really ever run into the problem, because the robot just takes, like, these
micro-breaks every now and then where it's not doing something. And, like, I actually don't care
that much about how many hours it can run. I care that it charges fast enough that
it can just always do whatever I want it to do.
Nice.
Well, and I want to talk about quickly, since you said that, specifications, like the number of degrees of freedom, right?
Which basically is how many joints does the robot have, right?
Yes.
So, like, humans have, like, six joints in each leg. That's 12.
If you have seven in each arm, that's 14 more.
So now you're like 12 plus 14.
That's 26.
You see a lot of robots today that have 26.
That's quite common.
Usually they don't have the wrists.
They actually have the neck instead.
So two here, and then you're like at 26.
We have three here.
So you have proper expression with your head.
It's quite important. We have all the seven
from here. We have three in the spine. And then of course we have
22 in each hand. I mean, what I saw in the arm design was
incredible. How many do humans have in the hand?
22. So you matched it. Well, okay, depending on how you
count your carpal bones. So, like, the small bones that you have here
allow you to cup your hand. You could to some extent see
that that's more like four or five degrees of freedom, not really
two. So then the humans have a bit more. But functionally, it's quite similar. And this,
again, is just incredibly important to be able to do all those tasks in a home. But also from an AI perspective: we talked about diversity initially, right? It is the one metric for intelligence, the diversity of your data.
Diversity of environmental data.
Well, diversity of your data. And your diversity, the limit to the diversity you can achieve, comes from two things. It comes from the environment you're deploying in.
So, right, if you're in a factory doing the same thing every day, it doesn't matter how good your robot is, it's not going to be diverse.
And then: how capable is your robot? How many things can it do, right?
Because if it can't do any kind of in-hand manipulation, or handle soft deformables or delicate objects or whatever, then you get no data of that.
So you really have to go max on both, right, if you want to maximize your diversity.
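The joint arithmetic above can be tallied in a few lines. A toy sketch: the per-limb counts come straight from the conversation, and the dictionary breakdown itself is just illustrative.

```python
# Degrees-of-freedom (DoF) tally from the conversation. The numbers are the
# ones quoted above; the structure of this breakdown is purely illustrative.
dof = {
    "legs":  2 * 6,   # six joints per leg -> 12
    "arms":  2 * 7,   # seven joints per arm -> 14
    "head":  3,       # three DoF for head expression
    "spine": 3,
    "hands": 2 * 22,  # 22 DoF per hand
}

common_robot = dof["legs"] + dof["arms"]  # the typical "26 DoF" humanoid
print(common_robot)                       # 26
print(sum(dof.values()))                  # 76 for the fuller build described here
```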
It was about 18 months ago that I partnered with one of my closest and most brilliant friends, Dave Blundin, to start Link Exponential Ventures. At Link, we manage about a billion dollars of seed-stage money, based in Kendall Square in Cambridge, right between MIT and Harvard. When Dave and I both graduated
from MIT, each of us immediately started companies. But at that age, everything is working against you.
You have an idea, you're challenged to raise money, and you can't afford rent. And even with all the
accelerators out there, you're competing against thousands of other startups for the same pool of
investors. Both Dave and I have spent a big chunk of our lives focusing on how do we inspire
and support founders to knock down those barriers, to go big, to create wealth, to impact the
world, to build and scale as fast as possible, especially in today's AI Everything world. We're seeing
so many companies reaching multi-billion dollar valuations in just two to three years faster than
ever before. Some companies are adding millions or tens of millions of dollars of value in just
weeks. So we started asking ourselves, how do we help these founders go faster and not skip a
beat? As an example, a couple of months ago, we bought an apartment building adjacent to MIT
where a graduating entrepreneur can move in immediately without slowing down their tech build
while they search for a place to live. And so we're doing everything we can to accelerate builders
and their super smart teams. Of course, funding is part of it, mentoring is part of it,
connecting them with my personal network of abundance-minded CEOs and investors is part of it. We house 66,000 square feet of purpose-built incubator space, and 26 AI startups call
Link XPV their home. And the returns have been amazing. I have nothing to ask, but if you are
building a company in the AI era, check us out at LinkVentures.com. Now back to the episode.
Yeah, a geeky question for you, but I'm really, really curious to know because when you build
something physical and then you attach a neural net to it, it's actually very hard to tell whether
the constraint in what it can and can't do is in the neural net or in the physical construction
of the hardware. Is there any way to decouple that and debug the two different sides, or is it just impossible once it's meshed together? You just can't?
Well, we have a pretty good neural net here. Yeah. So usually our approach there is: can we do it in teleop? And if we can, the right neural net can do it with enough data. And that's generally been proven to be true.
If we manage to do something in teleop, it's just like, okay, now we need a lot of diverse data of similar tasks.
So we get some transfer learning and we need a lot of that specific task.
And then almost irrespective of like how complicated that task is, you can get it to work.
Now, of course, that doesn't mean you can get everything to work with generalization across.
We're not there yet.
But you can see that like, okay, you can get the neural network to do this.
Now we need to scale it.
So we kind of get this beautiful transfer of knowledge between tasks, and out-of-distribution generalization, and all these things that we currently see in LLMs, that we don't see that much in robotics yet. We have some pretty cool stuff internally where we see some signs of it.
When I'm picturing this: you ask it to make crêpes Suzette, or you ask it to do microsurgery, and it can't quite do it.
And then you say, well, look, the hardware guy is claiming the hardware is good enough.
It must be the software guy.
And then the software guy is saying, no, no, no, the software, the neural net is fine.
The hardware just can't do it.
And then they fight it out.
And then we just say, like, well, bring in our best teleoperator. And if he can do it, then the hardware can do it, clearly.
It's like proof of access. Yeah. Okay, so that's where I was going. So you have a remote operator option.
Yeah, you can control the hardware.
Oh, that's really interesting.
So then you get... well, but we're getting to where this gets hard, where we can kind of no longer do this, because the hands are just so good, and they have very high-fidelity tactile feedback.
The human hands are so good?
No, the robot hands.
Okay.
So the human hands are still even better, but the problem there is the robot hands are really, really good, and they have really fast, highly detailed tactile feedback. Yeah. And we can't really transfer this efficiently enough from the human. Yeah. Because the teleoperator is using some kind of...
Yeah, I mean, back to the XPRIZE, right, the Avatar challenge.
So it's a really hard problem to transfer that fast enough. So now we start to see that the robot actually learns how to do manipulation way better from reinforcement learning in the real world. So you actually have the robot interactively learning, in real life, how to handle objects, and it can do things that the operator could just dream of. So now we're kind of stuck. No, we can't do that
anymore. I want to talk about three things in sequence, teleoperations versus full automation,
safety in the home and privacy in the home. Yes. Because those have got to be critically important
as you're entering the home. So the robots: we saw the Neo Gamma out here operating in teleoperated mode, but also in full AI mode, right? And it was able to do both.
And its AI systems are going to increasingly get better and better and more capable. Again, as we're talking, as I'm talking to Gemini 3 or Grok 4 or, you know, GPT-5 soon, I'm talking to a highly intelligent human and getting a feeling that it understands what I want, and it's able to, you know, sort of take action on my requests. I imagine
we're going to have the same level of
AI eventually in the robot
where I feel like I'm talking to a fully
intelligent being
in one sense.
Oh yeah, clearly. And one that is grounded, right? That actually understands, to some extent, what this existence is. Today's LLMs have this kind of abstract notion of it, but it's a facade that quickly falls away if you start to probe at it. But that will get there.
In teleoperation mode, you've got humans wearing VR headsets and using haptic controls. What are the humans doing?
They're giving slightly more high-level commands. So just guiding: hey, put your hands over here, grasp this thing. You don't want to over-constrain the system. You want to give it some opportunity to solve for how to do the task.
So we kind of have the learning coming up from the bottom, enabling a more and more abstract interface for the operator. And then we have the learning from all the large amounts of data we have coming from the top, capturing more and more of the general behavior that you want the robot to do. And they kind of meet in the middle, right, where the operator goes away.
You're using automation and teleoperation always together, in that regard. And learning.
So everything that enables the robot to do anything that the operator does in teleop is fully learned, end to end.
The network outputs torques to the motors.
That's very similar to what Tesla and Elon Musk were saying,
where the self-driving car was originally all C++ code with a little bit of neural net,
maybe 80% C++, 20% neural net.
Then every year that went by became more neural net.
And now there's no...
300,000 lines of C++ were eliminated.
Yeah, just a few guardrails left, and the rest is just one neural net.
So same thing here. It's all weights, right? Like, the code is just a few hundred lines.
Is it, really?
Yeah, that's all.
What's the parameter count? Is that all super secret?
It's kind of secret, but it will be small if you compare it to today's neural networks, because it's running on the robot very fast. It's kind of like your muscle nervous system. But it does take in vision, so it's not very small.
Well, that begs a question I'm dying to ask, which is, you know, you've seen Ex Machina, right? When I saw that movie, I'm like, why...
Don't go dystopian on this here, okay?
Why is the brain, the blue blob, in the head?
Yeah.
Why isn't it in the server room?
So learning is shared between all robots.
Well, yeah, and it can be much bigger.
And, you know, if half the power of the robot is going into the thinking, you could run twice as long on a battery charge if you move it over to the server room and just communicate remotely. So, why did you choose to put it in the head,
aside from being anthropomorphic and cool?
No, it has nothing to do with that. Okay, so there's a simple answer to that, which is, I mean, the head is where nothing else is, unless you put the brain there. Everything else is pretty freaking full. Building a humanoid with this kind of power level in such a miniaturized form, and still having enough space to make it completely soft and all this, is a really hard engineering problem. So it's like: where are we going to put this, if we don't put it in the head, if you're going to put it on the physical robot? Now, there's another argument. The very high
bandwidth thing that happens in your brain is vision and to some extent audio, smell, right,
tactile, but vision just dominates. Yeah. And you just want to minimize the distance between your eyes
and the compute. For real. So the bandwidth from the sensors, mostly the eyes, wouldn't make it over the home Wi-Fi.
Well, it wouldn't even make it down to the stomach of the robot.
Really?
Without getting overly complicated on which physical interfaces you would choose for this transfer.
It's very high bandwidth.
I'm shocked by that.
I'm shocked by that too.
But I mean, we're running no LiDAR, no structured light, no wrist cameras, no nothing. We're running pure emulation of human vision, right? Yeah. So we're relying so heavily on that. So it's very, very high resolution, very high bandwidth, very high frequency.
That's funny, because that's exactly where the human brain is: very close to the eyes, too.
It is. Now, that doesn't mean that you can't do things in the cloud, and we do things in the cloud. But it becomes hierarchical from an intelligence point of view. Just like if you think about your muscle nervous system: this runs quite fast, right? It usually runs at like 25 hertz, and it doesn't necessarily go up to your brain. There are neurons distributed out through your system that make decisions, right?
We have this in the robot.
We have some of our stuff pushed to the power electronics that controls.
Just for latency, just for speed.
Yeah.
And then you have the brain itself, which actually runs pretty fast, right? It's usually between 5 and 10 hertz, and even though it's 5 to 10 hertz, it's very low latency. And this runs on the robot. Now, if you're running more like a 1-hertz streaming thing, then you're typically in LLM time-to-first-token land, right? Yeah. That runs off-board. But that can't solve the high-frequency tactile-feedback manipulation tasks. That's too slow.
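The latency hierarchy described here can be summarized as loop rates and the per-tick time budget each implies. A sketch: the 25 Hz, 5-10 Hz, and ~1 Hz figures come from the conversation, while the loop names and the reflex-layer rate are assumptions.

```python
# Control hierarchy sketch: each layer runs at a different rate, and the rate
# implies a hard per-tick compute budget. Middle-layer rates are the ones
# quoted above; the power-electronics rate is a hypothetical placeholder.
LOOPS = [
    ("power electronics / reflexes", 1000.0, "joint controllers"),
    ("motor policy",                   25.0, "on-robot"),
    ("high-level behavior",            10.0, "on-robot, low latency"),
    ("LLM-style reasoning",             1.0, "off-board / cloud"),
]

def tick_budget_ms(hz: float) -> float:
    """Time available per control tick at a given loop rate."""
    return 1000.0 / hz

for name, hz, where in LOOPS:
    print(f"{name:30s} {hz:6.0f} Hz  {tick_budget_ms(hz):7.1f} ms/tick  ({where})")
```

At a 1-second budget there is room for a round trip to a server; at 40 milliseconds there is not, which is the argument for keeping the fast loops on the robot.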
Okay. The first time my Neo Gamma learns to crack open an egg to make an omelet, the question is: do all Neo Gammas then learn that? Is there shared learning?
They do. Now, there's shared learning in the sense that you can say this data goes to the cloud model, which is doing this for all Neo Gammas. But there are also the distributed models. So of course there would be a nightly checkpoint where it's like: hey, this model is better, we have more data, we validated this (we'll get into safety, which I'll talk about later, when it comes to how you validate the models), and then we deploy that to all the robots. So even though it's distributed on the robots, they can still learn from each other, of course. It's just that you need to do one hop through the server layer, do the training, and propagate this out.
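The "one hop through the server layer" loop can be sketched as toy pseudocode. All class and function names here are hypothetical, and the actual training step is stubbed out.

```python
# Toy sketch of the fleet-learning loop described above: robots upload the
# day's experience, a central model trains on the pooled data, and a validated
# nightly checkpoint is pushed back out. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Robot:
    name: str
    model_version: int = 0
    experience: list = field(default_factory=list)

def nightly_update(robots, server_data, version):
    # 1) robots ship the day's data to the server
    for r in robots:
        server_data.extend(r.experience)
        r.experience.clear()
    # 2) train and validate a new checkpoint (stubbed here)
    version += 1
    # 3) propagate the checkpoint to every robot
    for r in robots:
        r.model_version = version
    return version

fleet = [Robot("neo-1"), Robot("neo-2")]
fleet[0].experience.append("cracked an egg")  # one robot learns a task
v = nightly_update(fleet, server_data=[], version=0)
print(v, [r.model_version for r in fleet])  # 1 [1, 1]: every robot got the update
```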
There is a future, not so far away, where I'm pretty bullish on there being a lot of federated learning happening on-device. And this has to do with: how do we have your companion, really throughout life, learn from all of the experiences that are particular to you, but keep them private?
Yes. So all robots will not be the same, but they will share an intelligence backbone.
Let's go into the conversation of
privacy and safety. So you're inviting these robots into your home, and there will be activities that you may not want shared with the world. And then, of course, you're asleep and the robot is running tasks at night. You don't want to wake up in the morning and find your safe has been opened and the robot's gone, you know. Or you don't want the robot to be, you know, taking care of your aging mother, and to find out that it's giving her shots of scotch at night when she asked for them. I mean, so how do you deal with safety and privacy?
The last one is the hardest one, by the way.
We can get back to that.
Okay.
Not giving grandma scotch.
Shots of scotch.
Because, you know, generally, models are always kind of tuned to be sycophants, and they end up doing whatever you ask them to do.
So if we start with the privacy side, I think, first of all, it's just a lot about transparency. If you're one of the first people, like you, Peter, that will have a Neo Gamma in your house, we are kind of trading a bit on privacy versus being an early adopter, because without the data, we can't make the product better. Of course, we're going to do everything we can to make sure this is privacy on your terms and that you are in control, but we do need your data if we're going to make the product better.
Sure.
So, I mean, listen, I give my data to Google, to Amazon, to X all the time. And, I mean, people don't realize that you're sitting in the home having a discussion with your spouse, and Amazon's, you know, Alexa's listening, right? Siri's listening.
But they're doing something very important, which we also do, which is: no human in our company can hear or see that data.
Yes.
The data is going into the training model, yes, but it doesn't go by a human. Now, if we want to look at that data, and sometimes you might need to, right? It might be: let's figure out what happens here, because something clearly is happening across multiple robots that we want to understand. Then we'll send you a notification on your phone and say: hey, this specific window, we want to review the data. And you'll get a video of what that data is. Yeah. And then, if you say yes, then we get the decryption key and we can look at the data. If you say no, then we can't. So you're in control of that. And actually, even with respect to going into the training data, we always run a 24-hour delay on training. So if there is something that you really don't want even in the training data, like, this never happened, erase it from existence, you can go in and delete it before it gets into the training weights.
I just want everybody to hear: there are policies and plans that make this acceptable, that are used by technology companies, and you're going to be implementing the best of those.
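The consent flow described here (encrypted clips, with a decryption key released only when the owner approves a specific window) might look something like the following in outline. Every name is hypothetical, and the cryptography is reduced to a token for brevity; this is not 1X's actual implementation.

```python
# Sketch of a consent-gated review flow: data is stored encrypted, and staff
# can only obtain a decryption key for windows the owner explicitly approved.
import secrets

class ConsentVault:
    def __init__(self):
        self._keys = {}       # clip window -> decryption key
        self._approved = set()

    def store(self, window: str) -> None:
        self._keys[window] = secrets.token_hex(16)

    def request_review(self, window: str, owner_says_yes: bool) -> None:
        if owner_says_yes:
            self._approved.add(window)

    def decrypt_key(self, window: str):
        # staff only get the key for explicitly approved windows
        return self._keys[window] if window in self._approved else None

vault = ConsentVault()
vault.store("2025-07-28T10:00")
vault.request_review("2025-07-28T10:00", owner_says_yes=False)
print(vault.decrypt_key("2025-07-28T10:00"))  # None: no consent, no access
```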
It's a pretty old compromise, actually.
A lot of them.
But there is, like... so the mode I talked about now is when the robot is in what we call best-effort autonomy, right? Which is most of the time. It's what you saw earlier today, where you talk to it, you ask it to do something, and hopefully it does the right thing. If it doesn't do the right thing, then you can say bad robot, and hopefully it's better the next time. But actually, this is learning in the real world, right? This is really interactive learning. And the robot, interestingly enough, actually progresses faster on tasks when it fails than when it succeeds.
Sure. It learns more from failures, just as we do.
But in this mode, that's the privacy. Now, when it comes to teleop, then, of course, there's no way you can do the task without seeing the scene. So we do some abstractions so that you actually don't see people as people; you kind of just see blobs, and you just see the objects you're interacting with. And we can do a lot on the interaction, on the filtering side, to ensure privacy. But the most important thing we do here is that no one goes into teleop in your robot unless you approve it, right? And it's very visible on the robot, like the lighting changes, that someone is in your robot. And it's one of the pre-selected operators that you have approved, from a large set of operators: here are the four that service you.
So that's kind of like inviting your cleaner, or whatever, into your house. Another human.
Another human into your house. And you just need to make sure that they're actually invited.
So, to actually take a second and spell this out in more detail: in the early days, when I have a Neo Gamma in my home, it'll be baseline autonomous, but there will be times where it needs to bring in a teleoperator. And so you'll have teleoperators at headquarters, so that if it needs help doing something complicated, or it gets something wrong, the teleoperator can step in and actually make the task happen.
Yeah. In the beginning, there are actually two different modes. Okay? So you have the mode we call best-effort autonomy, which we just talked about. Yeah. And then you have task scheduling. My robot at home now is doing that. So I take my phone and I schedule and say: hey, between these hours, here are the tasks I want you to do for me. Today it's like: do my white laundry. And then there's a package coming from Instacart; you can receive it at the door and unpack it into the fridge. And just generally tidy. And I've given it the hours when I'm not home: these hours I'm at work, just get it done. Right? Now, I don't care if that happens autonomously or through a teleoperator, right? So a lot of that happens through a teleoperator, because some of these tasks are quite complicated and we don't know how to automate them well enough yet. Now, of course, that teleoperator uses autonomy to help improve the efficiency, so it's not all teleoperation. But I don't really care about the mix; the task gets done.
Yes.
So we kind of split it like that. And then there's a gray zone in the middle. If, say, you're having your friends over for a party, and you want the robot to be the bartender, and we don't have a bartender mode yet, then you can approve a teleoperator to do that.
So, all of the... most of the videos we see of Optimus at, you know, Tesla's diner or at their events are teleoperation.
They are. But I think teleoperation has gotten this kind of undeserved bad reputation.
Why?
I think it's because people don't have enough clarity: hey, is this teleop, or is it autonomous? But it is just labeled data. It's expert demonstrations. If you look at any of the big AI models that were trained, there was an enormous amount of people that sat down and hand-labeled data, looked at examples, wrote out question-answers, and bootstrapped this very high-quality data set for this to work, right? So you pre-train on general information; we also do that, on everything that's happened with the robot. Then you have a fine-tuning data set that is very high quality. And in robotics, that is teleoperation, because it's the expert demonstrations. It's the hand-labeled data. It's not different. It's just that I think there's some lack of transparency in what's going on.
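The two-stage recipe described here (pre-train on everything the robot has logged, then fine-tune on high-quality teleop demonstrations) can be sketched as a toy behavior-cloning loop. Function names and the "training" itself are stand-ins, not 1X's actual pipeline.

```python
# Sketch of pre-training on broad robot logs, then fine-tuning on teleop
# demonstrations, robotics' analogue of hand-labeled data. Illustrative only.

def train(policy, dataset, epochs):
    # stand-in for a behavior-cloning training loop
    for _ in range(epochs):
        for obs, action in dataset:
            policy[obs] = action   # toy "learning": memorize the latest action
    return policy

all_logs = [("cup_on_table", "reach"), ("cup_on_table", "knock_over")]
teleop_demos = [("cup_on_table", "grasp_gently")]  # expert demonstration

policy = train({}, all_logs, epochs=1)          # pre-train: broad, noisy
policy = train(policy, teleop_demos, epochs=3)  # fine-tune: small, high quality
print(policy["cup_on_table"])  # grasp_gently: the expert behavior wins
```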
Well, I think the objection is: if you have a demo, like a video, it makes it look like it can do something that it actually can't, because you hand-coded it.
Well, it clearly can, but it can't do it autonomously.
Yeah, it can't do it autonomously.
But I think you're dead-on that if it can physically do that, if the mechanism can do that, the neural net will fill in that blind spot instantly anyway, you know, once you've trained it. So I think that it's perfectly legit.
And now it's time for probably the most important segment,
the health tech segment of Moonshots.
It was about a decade ago when a dear friend of mine, who was in incredible health, goes to the hospital with a pain in his side, only to find out he's got stage four cancer.
A few years later, a fraternity brother of mine dies in his sleep.
He was young.
He dies in his sleep from a heart attack.
And that's when I realized people truly have no idea
what's going on inside their bodies unless they look.
We're all optimists about our health. But did you know that 70% of heart attacks happen without any preceding symptoms? No shortness of breath, no pain. Most cancers are detected way too late, at stage three or stage four. And the sad fact
is that we have all the technology we need to detect and prevent these diseases at scale.
And that's when I knew I had to do something. I figured everyone should have access to this tech
to find and prevent disease before it's too late. So I partnered with a group of incredible entrepreneurs and friends, Tony Robbins, Bob Hariri, and Bill Kapp, to pull together all the key tech and the best physicians and scientists to start something called Fountain Life. Annually, I go to Fountain Life to get a digital upload: 200 gigabytes of data about my body, head to toe, collected in four hours, to understand what's going on. All that data is fed to our AIs and our medical team. Every year,
it's a non-negotiable for me. I have nothing to ask of you other than, please, become the CEO of your own health.
Understand how good your body is at hiding disease, and get an understanding of what's going on. You can go to
fountain life.com to talk to one of my team members there that's fountainlife.com
I want to jump into another fun subject, which is the uncanny valley and the face. So, I mean, you've probably had endless conversations internally about how do you make a face look, how human do you make it, how skin-like do you make it, how do you represent it. Can you tell us, sort of philosophically, how you and your design team think about that? Where do you make it human enough?
It's this very delicate line, where you want to make sure body language comes across crystal clear, because that's the magic of the device, right, of the companion. Yes. But at the same time, you don't want to get to where your instincts tell you: hey, this is a human, but there's something wrong with it. You don't want it to be a human. And it's actually pretty surprising that there is this gap where people clearly identify this as: hey, this is a being, I understand its body language and everything, but it's also clearly not a human. Yeah. And you want to be in that space.
and then where you are in that space
kind of like depends a bit on who you ask
people have like a different threshold here
so we're trying to hit kind of like in a middle of that
and ensure that for as many people as possible
this is just like an incredibly easy to understand product
but at the same time that is not creepy
and I think adoption here
by the way talking about scale
adoption is so important
an adoption of new technology
usually takes some time
because there's just like this knowledge barrier
right there's a barrier to entry
even using a phone there's a barrier to entry
yeah
this interface is just so natural
like there is no barrier to entry
yeah it's something you just talk to like a person
You know what's incredibly cool to me is that there are like 50 things around the house that I don't know how to do, including the freaking right way to backwash the pool, all this crap. The robot can, in real time, access the information, learn how to do it, and just do it. I can't do that.
It would take me an hour to study it. And there's no laborer that's going to come into the house and do it for under, like, 400 bucks. And there are so many things that are in that category, where I'm not trying to replace a human being; I'm doing something that there literally was no other option for, because the knowledge is obscure. And there are so many of those things around a house. Like, the water heater keeps going out, and there's a reset process; you can look it up, and the robot can look it up. Yeah. And just go do it. And, you know, it's like, this is micro-units of work, essentially, right? You need hyper-specialized micro-units of work. Yeah. And you need, like, five minutes of it every now and then. It's super high value to you. Yeah. And it's just really hard to get to it.
You know, the shop vac: you can run it forward or backward. There's a manual there. You could read the manual. I just want to get, like, this crap off the garage floor. The robot will know how the shop vac works, because somebody else's robot, one of the other 10,000, has already done it.
Make my perfect teriyaki salmon on the grill.
Yeah, some obscure mixing, some food.
Which brings us to something, though, you talked about earlier: scotch, grandma, and safety.
So, I do hope to make you a perfect salmon teriyaki.
Thank you.
But I have to do it myself. I'm not going to let the robot do it.
Because that's one of the things we're actually not doing when we're launching now, and that is due to safety. Because what I've worked so hard on, right, for this decade, is to make robots that are intrinsically safe. And what I mean by that is: if something goes really wrong and it accidentally hits you, that might be painful, but it's not likely to severely harm you.
Interesting.
Right. And once you pick up a kettle of boiling water, there's no more guarantee that you are safe, right? So we generally avoid any kind of dangerous objects, so that we can ensure safety in the beginning.
Of course, over time, as the AI improves
and we get more and more certainty on all behaviors being safe,
we will allow cooking and other things.
So we're doing internal projects on this,
but we're not going to be rolling it out to the customers in the beginning
just due to safety concerns.
Yeah.
Cooking and safety is a...
It's a real problem for humans. It's not an easy thing.
I mean, it's a real problem for humans, too.
It is. But there's the notion of intrinsic safety. This is incredibly important. And then there's the safety of the AI. And this is the reason we have a white paper out on this, which I recommend you read if you guys are interested. But it's why we started very early, betting extremely heavily on world models.
And world models?
World models.
They are, of course, currently the best-known path towards AGI.
But even more importantly for us, like short term, as we progress here on like the data collection and model training, they give us this incredible opportunity to automate evaluation of models, including safety and red teaming and all these things.
So you can think about like if you train a new model and now you want to know if it's better than the previous one and you can deploy it to all your customers and you can get some vibe check a few days later and like, hey, are people more happy now?
It's generally how it's done.
You don't want to do that with a physical system, right?
You can't do that with an autonomous car either.
What the world model actually is: it is a model that is able to generate what will happen if you take specific actions. So you can think about it like a video model where you ask it to do something, except it gets not only the question of what to do, it gets the actions to do so. And it gives you back not just a video, but how the world feels: the forces, everything, for the robot.
So it's essentially like the robot's in the matrix.
We take the robot, we put it in a world model,
and it doesn't know that it's in a world model.
It thinks it's in a real world, and it does its things,
and we ask it to do the things we're usually doing around the house,
and we see what it does. And we can put in lots of automated checks, to make sure both that it's performing better, and also that it's not doing anything that could be deemed unsafe.
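The evaluation loop described here, rolling a candidate policy out inside a world model and running automated safety checks before any real-world deployment, can be sketched like this. Everything is a toy: the dynamics model is a stub, and the single safety check stands in for a whole red-teaming suite.

```python
# Toy sketch of using a world model as an evaluation harness for a policy.

def world_model_step(state, action):
    """Stand-in for a learned dynamics model: predicts the next state
    (video, forces, contacts) given the current state and an action."""
    return {"t": state["t"] + 1, "spilled_hot_liquid": action == "grab_kettle"}

def rollout(policy, horizon=10):
    state = {"t": 0, "spilled_hot_liquid": False}
    violations = 0
    for _ in range(horizon):
        action = policy(state)
        state = world_model_step(state, action)
        if state["spilled_hot_liquid"]:   # one of many automated safety checks
            violations += 1
    return violations

cautious = lambda s: "tidy_table"
reckless = lambda s: "grab_kettle"
print(rollout(cautious), rollout(reckless))  # 0 10: only the cautious policy passes
```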
So it's really this incredibly important and powerful evaluation tool, and that starts solving the problem, right?
Do you think that's why you guys and Figure and Tesla are getting monster valuations? Is the valuation just purely, hey, we're going to sell 10,000, 20,000, then 200,000? Or is it that the world model is such a unique asset, and so valuable in thousands of different ways, that it becomes very much a self-feeding barrier to entry? And could that also explain... do you plan to productize that core capability?
Yeah, so to me,
back to like our mission
is to create an abundance
of artificial labor.
And that goes across
both the digital and the physical.
So yes, it will be productized.
Still a bit out, but like, yes, this will be
productized.
Because it seems like no matter how much factory capacity you build, it wouldn't be until, like, 2028 or 2029 that you could diversify into all these things, like microsurgery and warehouses and, you know, drones, all that. But that same world model could apply to those much sooner; you'd just have to somehow get it into the hands of many companies.
Well, actually, I think revenue from the robots will dominate forever. I do think the real physical world has way higher value than people think.
I mean, just for folks to realize, right: we're at $110 trillion global GDP, and labor is half of that, right? So the TAM, the total addressable market here, is like $50-plus trillion. And that's just if you keep doing what we already do.
It's going to be so much bigger.
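The back-of-envelope arithmetic behind that figure works out as:

```python
# TAM arithmetic from the figures quoted above.
global_gdp_trillions = 110      # ~$110T global GDP
labor_share = 0.5               # labor is roughly half of that
tam_trillions = global_gdp_trillions * labor_share
print(tam_trillions)            # 55.0, i.e. the "$50-plus trillion" figure
```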
And you've attracted some incredible early investors.
Do you mind just sharing who's come into your cap stack?
I think, maybe, we have some big classical ventures: SoftBank, Target Global, EQT, NVIDIA, OpenAI. So there's some good names in there.
A lot more.
That's damn good.
I think it's becoming increasingly clear, right, that the bottleneck in society to superintelligence is not better algorithms, or scraping the internet in a more thorough way. It's better data.
Yeah, it's better data.
And then you need a robot to generate this data.
But even more importantly, it's the physical parts, right?
You need more data centers.
You need more power.
You need like to do this, you need more labor.
And so it's kind of this bootstrapping problem. And if you just break down the pyramid, and you say superintelligence consists of this incredible amount of data, and it consists of this substrate of compute and power, then you see that humanoids are a solution to both of them. And if you just do the math, you'll see that you're probably not going to get there without that.
You're just running out of these like basic constraints.
And I think humanoid will be surprisingly useful, surprisingly fast.
Not perfect, but it's going to be surprisingly useful, surprisingly early.
I have a question on behalf of Salim Ismail, who's typically our third moonshot mate here.
I have to ask, because I have to ask on his behalf.
So Salim is constantly saying, why two arms, why two legs, why not six arms, why do we need to have a humanoid form?
I mean, in the kitchen, wouldn't it be better to have an extra pair of arms?
So what's the definitive answer to him?
Well, I think, first of all, it's kind of right.
Like, humanoid isn't the only thing that will work.
I do think that, and we've looked a lot, like, I don't know of any form factor that is as general as a human in doing kind of like any kind of labor in any kind of environment.
And we've tried to simplify.
We've tried to increase complexity.
Like, this is a pretty good machine.
So if your goal is to just be as general as possible, then you need a humanoid.
Now, if your goal is to transfer knowledge from humans,
it's a lot easier if you have a humanoid.
I think that's the most important part of the equation there.
It's very important.
You're not going to transfer to a six-armed robot learnings from a human.
It's at least harder.
And then, I mean, the world is made for humans.
It's like Jensen says, right?
Like it's Brownfield deployment.
It's very true.
And then I think lastly, it's just,
do you want to live with a six-legged robot in your kitchen?
But I view humanoids as kind of like the pinnacle of general technology.
But there is this kind of repeat pattern through history of this happening with like zero to one novel products.
So if you think about, say, the computer, started with big mainframe computers, solving very specialized tasks.
The equivalent in robotics would be industrial robotics, right?
Now comes the PC, or even before
the PC, the kind of like VICs or Ataris or whatever, like more general computers.
And this gets produced at such a scale that it just becomes generally available.
And now it's super high quality and it's incredibly reliable.
It's because it's a huge ecosystem and it just becomes the best way to solve any problem.
Even though, and here's the argument against humanoids, right, it's overly complicated for a task.
Even though it's overly complicated for the task, when you take your beautiful Apple here
and you write a word document.
I mean, that's the most complicated typewriter I can think of.
Humanity, like, mastered nanoscale chip manufacturing for you to have a typewriter.
But it's still actually the cheapest, most reliable typewriter,
because it's just like made at such a scale.
Humanoid's exactly the same.
Now, if you see what happens to computers now,
because the market has become so big, it starts to actually become segmented again.
And now you see you can carve out niches in computing.
And they're still so large that it has scale.
So now you get specialized compute for AI, specialized compute for physics,
specialize for all kinds of things, right?
Yeah.
And this is because it's become so big.
Now, the same will happen in robotics.
So we will get to where we have Star Wars.
There will be different drones doing different tasks,
and they will look kind of more specialized, like my repair drone with six arms
and scissor hands and I don't know.
Like, it'll get there, but you have to go through this humanoid phase first.
So let's just say humanoid is a phase.
Yeah.
My favorite robot is still data from Star Trek.
It's a great robot.
Yeah.
It's kind of the closest thing I think of to what you're building.
You know, lovable, happy robot that you can give a hug to.
Do you have a favorite robot?
Well, I wouldn't have thought of data, but now that you said data, that's top of the food chain.
Everybody loves R2-D2, because for some reason R2-D2 has no voice, even though they have voice technology everywhere.
All those visions, though, are built around what Hollywood could easily get on a set.
Yeah. I think the humanoid form factor, though, has, there's another aspect that you kind of
touched on, but when I bring it into my house, I have a vision of what it can do and what it can't do
based on humans. Right. And so I ask it to do things that are rational and not irrational,
because I know what a person could do. If I had a six-legged thing that Salim came up with,
I'm not quite sure, like, should it be able to climb on the roof and fix the shingles or not?
I don't know, like, what this thing's capabilities are.
So it breaks the whole kind of, like, comfort zone of the user.
The thing that does surprise me, though, is that the robots are unbelievably coordinated between themselves.
And there's some good demos at MIT.
They're just mind-blowing.
But when you have two movers trying to take a couch up the stairs, and they're like, it's like the stooges, right?
They're like, oh, move a little, a little bit.
But when you see the equivalent act with two robots, they're just in phase.
And they just do it seamlessly.
So I think there's a very high probability that the standard in the home is going to be like four or six.
If you get the price point down a lot.
But they work so well in concert with each other.
It's almost a crime not to have that teamwork synergy.
Huh.
It seems like a bit much to me.
But in terms of, maybe I could like if I need to have movers, like, you know, I can ask my
Neo-Gama and he'll invite some friends over.
But you're not taking everything into account, Peter.
Because you have to remember that by the time you have
this many robots in your home, everyone's
homes are really freaking big.
Yeah. You have an abundance
of labor.
I mean, like,
your house is not going to be
this small. So labor
is going to continue to
demonetize and democratize.
Everybody, there's not a week that goes
by when I don't get the strangest
of compliments. Someone will stop me and say, Peter, you've got such nice skin. Honestly, I never
thought, especially at age 64, I'd be hearing anyone say that I have great skin. And honestly,
I can't take any credit. I use an amazing product called One Skin OS01 twice a day, every day.
The company was built by four brilliant PhD women who have identified a 10 amino acid peptide
that effectively reverses the age of your skin. I love it. And like I say, I use it every day,
twice a day. There you have it. That's my secret. You go to oneskin.com and write Peter at checkout for
discount on the same product I use. Okay, now back to the episode. Let's go someplace that I'd love
your insight on, which is China. So when I think about the robot industry, you know, I'm tracking
50 plus well-funded humanoid robot companies in different stages around the world. Majority are
U.S. and China. There's some in Europe. You started in Norway. There are some in India, parts in
Japan and Korea. But China by far, I think, is dominating. And what I see there with the robot
Olympics and special robot villages is pretty extraordinary where the Chinese government is really
accelerating this for obvious reasons. You know, they need access to
low-cost labor to continue
the manufacturing boom. They need it
for supporting their elderly population.
How do you think about China? What do you think
of the work coming out of China?
Well, first of all, I think we need the same thing here.
We don't kind of like realize it maybe as much,
but of course, we need the same thing.
I think the Chinese ecosystem is incredible.
I mean, I don't know anywhere else in the world
where you can go and develop hardware as fast.
It's just, right, you need something and you go over and get a machine on the corner here, something broke on a PCB, and you just, like, go over the street and, like, buy some new components.
There's someone, like, there's someone doing a reflow over on the street corner over there.
And, like, this is an incredible ecosystem that I think the bay, I say the bay now, but I know it's also like, I mean, the Silicon Valley, like, the hardware bay in the Shenzhen area, it's also a bay.
But Silicon Valley Bay, this bay, we have a long way to go if we want to, like, really get to the same level
of rapid iteration on hardware.
So that's just incredible.
I think the manufacturing part is incredible.
It's just so much process knowledge.
And I think this is highly underrated.
Like, you know, think about magnets.
I do.
We have, so do I a lot.
We have great material scientists that know how magnets work.
work and can design very good magnets. But then we lack that guy who knows that, yeah, you do all
that stuff they told you in the books, but, you know, after two hours you have to stir to the left,
not the right. Like, there's just so much of that, right? Yeah. And this is just so
disseminated in China. There's so much process knowledge.
How did that evolve in China and not here? What's the cause? Top-down incentives? Just funding? I think it's the
government saying, you're a robot city, you're a neodymium magnet city, here's capital
and people, and just, you know, communist-directed, but then allowing companies to build on top
of that. I mean, is that what you see as well or not?
I'm not sure. I think the Chinese startup community is very alive, right? And the capital
is quite alive. And I felt it runs very similar to kind of like more of like what we
like to think of as like the Bay here. I mean, I used to take a group of investors every year to
China and we would go and visit Shenzhen and Shanghai and Hong Kong and Beijing and meet
with Baidu and Tencent and Huawei and the leadership of all these. And there was a super
vibrant entrepreneurial community. The mindset was 996. You'd work
9 a.m. to 9 p.m., 6 days a week, and that was a great lifestyle. And you considered the 1.3 billion
people in China, your market, and the 300 million in America, your market as well. But there was a fall
off after 2019. And there was a real dip in that ecosystem. I think it's beginning to reemerge.
But I think the government is really pushing hard on supporting AI. And humanoid robots are the embodiment of
AI. They're, obviously you know this, cleanly meshed. So I do think there's a
lot more support that the US government needs to give to US hardware companies. But I think like what
I wanted to say was just, like, I think the most genius thing was the special economic zones,
like the free economic zones. Sure. Like, it's not that people here don't want
to build stuff and we want to build stuff. Yeah. It's just it takes too long and costs too much.
and it's too convoluted, right?
We do it in spite of the challenges.
Yeah, I think the US should just, like, spin up some free economic zones.
Like, here you have, like, expedited permitting.
Well, in California, in particular, it would be a no-brainer to do that.
That's the simplest best idea ever, but I don't know what it would take to get it through.
No, I mean, but this is some of, like, what people are working on these days, right?
Like, if you look at, like, Masa's dream for Project Crystal Land, for example,
it's very similar to this kind of, like, a free economic zone in the U.S.
I think there's another problem we need to solve, too, though, which is that, you know, the U.S. just did software, software, software, software for, you know, forever.
And not only were we not doing hardware,
we just didn't do chips.
Well, not forever.
We're in Silicon Valley.
Yeah.
Yeah, where's the silicon? Like, there was a phase in between here where people kind of lost their way.
They lost the plot.
Yeah.
And now we have to find our way back.
Well, the venture community got all messed up, too, because they wouldn't fund it. If you had a physical component in your business plan, they'd be like, well, I'm looking for the next Meta
or Google.
Hardware? They don't really care.
Like, yeah, hardware's hard.
Well, all I've been doing is keeping this company
afloat for 10 years.
So I can, yeah, I can go on and on.
Yeah.
Like, you go on and on because it needs to get fixed.
But it's amazing how much people are afraid of hardware.
Yeah, right?
Yeah.
But it's going to kill us if we don't find a solution.
I think also it's going to like kill VC, to be honest.
Yeah.
I think people like, if you do get return or venture,
it used to be incredible, right?
If you got to be an LP in a venture,
you're like, oh man, I'm like set, right?
I'm going to make the big bucks.
And now it's 10 years and no return.
It's untenable.
And now it's more like a philanthropy thing.
You want to fund startup entrepreneurs because venture doesn't really make that much money.
And I think it has a lot to do with...
Except at Link.
You guys are doing amazing.
We are doing amazing.
That is good.
That's good.
You guys touch the hard stuff.
The point is if you don't touch the hard stuff, what's your moat?
We touch the early stuff, right?
So it's first checks into companies.
that then are scaling rapidly
versus companies that are doing...
But we're doing first checks
into hard stuff at the seed stage,
but we're not doing hardware.
So I'm as much part of the problem as we were talking.
Like, what if somebody came to me
with a seed stage,
hard physical device problem?
And we very rarely will fund that.
And that's a dysfunction.
Well, you should look at
what's the biggest companies?
They all have hardware.
Yeah, I mean, listen,
Elon cracked the code on that.
I mean, he's been able to just make hardware sexy
and has generated incredible returns.
Yeah, I think Jensen says it really well, right?
You want to work on the really hard problems, the ones that are super painful, that you are uniquely capable of,
because you know that your competitors have to go through the same pain or more.
They're not going to be willing to take as much pain as you.
This is how you win.
And things here are actually defensible, right?
But the moat we have on hardware, that's years.
The moat we have, I'm incredibly proud of our AI team, by the way, we've accomplished some
things that are so amazing on such a budget.
So we're way ahead of everyone else in what we're doing on world models.
So let's say we're three months ahead.
Right?
Because that's, like, way ahead.
And that is way ahead.
You're incredibly rare.
And all props to Elon.
He's incredible.
But Elon's pathway to getting to hardware was...
Yeah, sure.
A couple hundred million dollars, burned it all himself, got down to near bankruptcy.
It was almost dead on both of his big, you know, Tesla and SpaceX, barely pulled it out and then made them huge.
But the VCs were not touching it.
Yeah.
He needed to borrow money in 2008, in the middle of a divorce, with SpaceX having its third failure.
That is not a repeatable funding model for America.
You know, I was very lucky.
I had a very good, like, early founding investor.
I said, like, the company didn't start in a garage because we're not in Silicon Valley.
We started in a barn because we're Norwegian.
And at some point, two years later, he sold the farm, so we had to move.
He sold the farm to fund the company.
So you would, this company in Silicon Valley wouldn't exist without a Norwegian investor.
No.
So we wouldn't exist because we wouldn't, like, we wouldn't have had the runway, right?
Operating this in Norway was just incredibly cheap compared to operating.
So, your initial Norwegian investor, did he or she believe
that they were going to make a huge amount of money?
Or did they do it because they're passionate about your vision and your mission?
Or are they believing in you?
Yeah.
I think it's all three.
All three.
Yeah.
It turned out pretty well.
Yeah.
But it wasn't here.
That's my point.
I mean, there are different phases, right?
If you want to scale something, you have to come here.
I think you can do deep research in other parts of the world.
There's talent everywhere.
But really kind of like hyper-scaling that and like getting it across the finish line.
that's here.
Did you consider LA, Austin, Florida versus here in Palo Alto?
Yeah, we even had manufacturing for a little time in Texas, in Dallas.
There's just, there's, there's something in the water, something.
Like the talent pool, it's a talent pool.
Yeah.
Like there's talent everywhere, but like the density of talent.
And, you know, there's different types of talent.
when you have like a zero to one field like this
in the beginning you have a lot of like really passionate people
they've been working on this all their life and they're so good
in this case like human robotics right
and I remember back in the day if you went to the humanoid conference
like everyone could fit around one's table
and those people are still the ones that are most of these companies right
and those people
they don't know how to make a great product
they don't know how to scale that to a million or a billion devices
they don't know how to write
the incredibly good APIs
for the software to support the ecosystem
they know this thing
and they do deep research
and now your field kind of comes of age
and it's time to actually do this
because the timing is right and we purposefully
stayed very small for the first
seven years just doing core technology
now suddenly you get access to this talent pool
of people that just go from field
to field that is the hottest thing right now
and just do it again and again and again and again.
And that's Silicon Valley, right?
But there's been an incredible inflection point in humanoid robotics.
I remember, you know, we had the Avatar XPRIZ, right?
Our ANA Avatar XPRIZE, that had teams build robotic avatars that you could telepresence into.
And I remember the finals, we had good teams.
I know some of your team members here were part of those teams.
But it's come, you know, a thousand X since then.
And really in the last five years, right?
Has it been the AI models that have made that happen?
What's caused the inflection in the last five years?
The AI is clearly part of it.
There are things we do with AI now that we couldn't do five years ago.
I do think we saw kind of like the breadcrumbs and we were, like, on the path already then,
but it wasn't kind of working yet.
and I think it's just you hit this kind of like critical mass
of like accumulation of like innovations that has happened in hardware
I do think it's important to note though that
it's hard to see what is like real innovation and not in any field
and especially in humanoid robotics. So I want to just point out again
that you can go on YouTube and find things from the early 2000s
that look better than most things you see today
that humanoid robotics companies are doing. So you can't just make a beautiful
robot that looks good. You have to actually make a robot that is safe, that you can actually
manufacture at scale for a very affordable price, and that still is capable, right? And I think
that's been a main unlock on the challenge. You need to get those things right. And that just
takes a lot of time. The neural nets are light years ahead of anything anyone would have predicted
five years ago. And then the hardware, the Nvidia chip that it runs on is getting pushed as fast
as any innovation in history because the demand is through the roof.
So that part is well understood.
On the physical hardware side, what's something that you used today
that you couldn't have used 10 years ago?
Yeah, what's improving in the motors,
in the harnesses, in the electronics, batteries?
Yeah, so I think mostly it's been on, like, the motors and materials side.
So we make our own motors, including not only the IP for the motor,
but also the manufacturing and automation for all this and everything that goes in.
The supply chain is so broken.
Yeah, you literally make your own motor.
Like, wind the wires.
Yeah, so.
Holy crap.
We do it kind of special, the one-x version of this.
So, motors is one of the things we really innovated in.
And this is actually how I started, like, you know, when I sat down a decade ago,
the first thing I did was a design a different kind of motor.
Okay.
And the motors we have now in Neo, they are five and a half times the world record in torque-to-weight.
Wow.
And that's why we have something that's so powerful,
that we don't need gears,
we can just pull on these tendons
to loosely simulate
human muscles.
That's why the tendons.
And that's why it's so light.
It's also why it's so backdrivable and compliant.
It's why it's so cheap to manufacture.
Like, everything kind of comes from this.
Now, of course, when you have these motors,
then you can start using tendons.
But then you need to sink a lot of time
into figuring out how to use tendons.
And then comes all the material science
to have tendons that can last millions on millions and millions of cycles.
And these are really hard research problems, right?
They're not even engineering problems.
They're hard research problems.
And we spent so much time figuring all that out.
You can't make the motors that we make without doing some pretty significant innovations
in electronics and how you do power amplification and in general just motor drives.
So that kind of like, there's a lot of things that come together.
You couldn't have designed the motors we do today
without some of the innovations that had happened in magnetics.
And of course, you couldn't have done it without AI either.
The first thing I did back in the day when I sat down was to
program a network to learn how to make motors.
Oh, you're kidding. You designed the motors via AI?
How long ago was that? It's more than 10 years.
Wow, okay. I mean, it wasn't transformers, but it doesn't matter.
Yeah, well, yeah, for that kind of
use case. But it was a neural net. Yeah, wow.
Bernt, you think about robots in the world probably more than anybody else.
What's your vision 10 years from now? What are we seeing? What does abundance in labor enable
that goes beyond people's initial reaction to how I would use a robot?
I think, like, first of all, what will happen is actual abundance means everyone can
have whatever they want, but not only can you have whatever you want, you can have whatever
you want in a sustainable manner. Because sustainability is something we lose when we cut corners
to save costs, right? If you actually have an abundance of energy and labor, why would you not
do things sustainably? And then I think the next frontier that comes after, just in general,
like building out the infrastructure across the globe that allows everyone to have an incredible
quality of life is how do we solve the remaining really hard problems in science?
And I think this is not going to happen without humanoids because you need to build particle
accelerators.
You need to build enormous biotech labs.
You're doing experiments.
You need to do all the experiments, right?
And also I think like it's kind of like almost existential to us for human happiness.
I don't want, like, the godlike AI in the sky to be directing
all of the planet's inhabitants
around with their glasses to do experiments
for it to solve science. That's
not the future we're aiming for.
We want to have this beautiful
symbiosis and like co-invention
between man and machine.
Yeah, that particular use
case is so acute
where, you know, Demis Hassabis is working
on the full cell simulator to try and
close the loop, but you know that you're going to need
people to mix a huge number of chemicals
to truly unlock longevity
and health and chemistry.
And, you know, the humanoid robots can do the work because everything in the
lab is... It's not only that they can do the work. I think this is a common misconception. Humanoid robots
will do a lot of the work initially. But once it gets to a certain scale, the human robot will
make the automation system that will do the work. Because humanoid robots will not be machining
new parts with a Dremel, right? You will use the CNC machine. Humanoid robots will not be moving
car chassis around by like carrying it with 30 humanoids.
Generally, this does not make sense, right?
We have existing automation system and we will build more.
What humanoid will do for you is to build all these automation systems
and get them up and running and then cover the remaining gaps
that you currently can't cover without humans.
Yep.
Yep.
How are you going to do it in a vacuum?
I want my Neo-Gama to help me set up my space station or mine my asteroids.
I think, first of all, we have a huge advantage because the robot is so light.
And kind of like, I guess Elon's working on this, but payload to orbit is still expensive.
Secondly, most of the stuff we have actually works pretty well in space.
We have to do some stuff with the epoxy on the motors.
That's not going to be very vacuum hard.
If you want to train in zero-g, one of my companies is a company called Zero Gravity Corporation,
which flies these parabolic flights.
Yep.
We flew Stephen Hawking in Zero-G, maybe Neo-Gama should come next.
That would be great.
And I actually do think it's like,
there are real use cases for this.
And one thing is, like, building a base on Mars or whatever, right?
But even before we get there,
just in orbit assembly,
yes.
It's this extremely high value task.
And I think there actually we will use teleop.
And the reason I'm saying that is just that the cost of mistakes is so high
that you want to use like the smartest,
most expert humans you have.
And until we get to super intelligence,
that will be a human.
And you have people in orbit,
you have robots outside,
very low latency,
and you can teleoperate in a very natural manner
as if it was your own body,
how to do all of these in-orbit assembly tasks.
And it can be incredibly complex
and you can still do them with very high accuracy
and you're not endangering people.
And of course, when you've done this for a while,
you have the data to automate all this,
which is very interesting.
Yeah.
Your weight advantage would be really amazing too
because you can take five, six, seven of these.
And the energy efficiency.
You must be, yeah.
You're going to have to somehow bleed off your
heat, right? Right. It's really hard. Yeah, that's right. You must be looking to hire people.
We are. What kind of people watching are you interested in potentially hiring?
People who are just really mission-driven, that really believe in the beauty of a world
where we have an abundance of labor, and who like to solve really hard problems. People who also can demonstrate
that they've solved incredibly hard problems
because that's what we're doing here, right?
Everything from material science all the way in the bottom
all the way up to the foundation models at the top.
And I think what we offer is just this incredible place to work.
Not with respect to work-life balance and all this; we're not quite Chinese, but it's a hard problem and we're in it to win.
But probably the place on the planet
with the most experts across all different disciplines in science.
So if you come here as a mechanical engineer, you will learn so much about AI,
about electrical engineering, about batteries, about material science, everything else.
And like, it doesn't matter which discipline you come from, right?
You will learn so much from the people around you.
And I think also that's one of our biggest strengths.
We really always work in multidisciplinary groups.
And we find the good solutions between the disciplines where it's like, hey, you don't
need to do that.
That's kind of costly in manufacturing.
I can calibrate that away.
Or, like, you don't need to calibrate
that, because this doesn't cost me more.
Yeah.
I can see that actually when you're walking around the building here.
You know, Dean Kamen's lab in New Hampshire is very, very similar. He's the Segway
inventor, and everybody's just happy.
Like, all the MIT people that we know that work with him, they're just happy.
And the reason is because when you do software, you're largely behind a workstation all day,
you're sitting, whatever.
But when you're doing physical things, you're moving around a lot more.
And you're building and making and it's just, it energizes you all day long.
It's just such a fun work environment around.
It's so obviously tangible, just walking around and talking to people.
So it's a good, it's a good lifestyle.
And it helps when there's a lot of robots walking around with you.
Yeah, for sure.
And people can go to the 1X Technologies website to find out what positions are open.
Yeah.
Yeah.
And follow us on X and you'll learn more about us.
Pretty active there.
For sure.
And one thing I'm excited to announce is that you and the
Neo Gammas are going to be at the Abundance Summit in March.
Yeah, can't wait.
Yeah.
Meet a lot of great people.
Yeah.
So our theme this year is digital superintelligence and the rise of
humanoid robots, because the two are going together.
Sounds pretty spot on.
Yeah.
I think so.
I mean, it really is.
It really is.
And without making any promises, I'm hopeful we'll have a number of the Neo Gammas there, sort of
like interacting and sort of living and
hanging out with the abundance members, yeah.
How do they get there?
You buy them an airplane seat?
They just walk on.
Yeah, you don't box them up, do you?
Yeah, we're down in L.A.
Yeah, we're probably going to drive down to L.A.
It's easier than getting them on a plane.
Do you put them in the seats and strap them in?
Yeah, we do.
Actually, at this point, they're starting to sit into the seat themselves.
So it doesn't strap itself in yet, but that's coming.
It's an interesting story.
It's a funny story here, though, because we put one of the first robots on a plane back in the day.
We were rushing back home from China.
It was a proper startup story where we were like, it's way back in the day.
We were running out of money and we hadn't kind of like gotten to where the product was good enough to raise more money.
So I took the entire team and we went to China and we lived in a hotel for five weeks,
designing and manufacturing kind of like as we go. No, designing until late in the night.
And in the morning you walk down to the machine shop, you give them some information,
you get some new parts back, and we just kept, like, iterating on this.
The electronics market there, it's magical, right?
And then we have to go back, and we're just like,
okay, we rush back on the plane to meet some investors.
So we check, we take the robot and we fold it up, right?
And we put it in a briefcase.
And then when it goes through the scanner,
you can see the guy just go all white,
and he's, like, shaking his hands when he's opening the bag.
And we're like, no, no, it's just a robot.
And he's like, yeah, it's a robot.
that's hilarious
that's awesome
Well, yeah.
I'm really, really thrilled. I loved your TED talk,
and I'm excited to have Neo Gamma
there hanging out
with all our abundance members, and hopefully
you'll be ready to sell some robots
in the early days of getting them into the home.
No promises,
but you're going to have
sort of an application process
to get the robots in
and start to build data
assets. When
do you think you'll be ready
to take pre-orders and orders
for Neo-Gama?
I'm going to be kind to my team and not say a specific date.
But it is happening this year.
Okay. It's this year. This year, 2025.
2025. Yeah.
Now, we're going to talk a lot about this
in the pre-order, but
the most important thing we do here is expectation
management. This is incredibly early, right?
Yeah. And what you're
buying here is kind of a ticket to be part of
this transformation. Adopt a
Neo into your family, help us
teach it. It's going to be a lot of fun.
It's going to be useful. I love that framing.
That's perfect. It's going to be useful,
but it's not going to be perfect. It's going to be a lot of
rough edges. And we're going to
treat you really well. We're going to figure it out together.
It's going to be an incredibly fun journey.
And that's kind of
like the early adopter program that we're
launching this year. Yeah. You're going to have a long
waiting list.
We need millions and millions of these.
We need to get the price point down.
And then, I mean, when you think about the constraints to human happiness globally,
a lot of them are going to be solved through regular AI,
but another big chunk, most of them are related to houses and food and physical happiness.
Give the jobs that are dull, dangerous, and dirty to the robots.
And then create a lot more of the things that make people happy,
the parks and the homes and, you know, all of that, bigger homes
and better things to play with.
It's all constrained by that inability to manufacture,
through the lack of the humanoid, you know.
Let me ask you a numbers question.
So I interviewed Elon at FI Summit.
You're going to be there in October as well.
And also Brett Adcock.
And they both gave a number around 10 billion humanoid robots
by 2040.
Do you believe that number?
10 billion by 2040.
Yeah.
I think it's probably roughly correct. I think it might happen even before. I think it really comes down to what kind of artificial constraints we put on how we scale. At that point you have to actually really think about, like, how are you refining rare earths? How are you mining more aluminum? How are you bootstrapping your labor really well with robots? How do you build out the power infrastructure? We need more chip fabs, by the way. We're not going to be able to build 10 billion humanoids without way more chip fabs.
We can help by having robots build this out.
But I do think that timeline depends a lot on how permitting processes go, and, like, how much we kind of allow ourselves to scale, and how fast.
But I do hope we get there.
Yeah, I mean, for reference, there's something like a billion automobiles on the planet, you know.
You would think there's more.
And I'm not sure how many iPhones there are,
but there are on the order of 8 billion smartphones on the planet.
I'm really glad you said what you just said, though,
because the numbers are so wildly out of balance.
Each one of these robots uses a full GPU.
It could probably use two.
And if you're talking about a billion of them by 2040,
we're only making 20 million GPUs a year.
And then TSMC has 66% market share now in the fabs.
So they have literally one point of failure for the entire economy that we're trying to build. And so we're desperately short on fabs, and that's if you just go one layer deep.
Yeah, like look at ASML behind it.
Sure, right, right. Like the supply chain for chip fabs, that's even more brittle.
Yep. I'm really surprised that we're not moving much faster, given that Elon is, or was, in Washington, right in the middle of it, and that we're just letting this bottleneck persist.
Well, how long have we been talking about magnets?
We've been talking about magnets for a long time, right?
The problem is that, like, only China can really make high-grade magnets.
Yeah, yeah, yeah.
And it's not just the rare earth.
It's the process to produce.
Yeah, yeah.
And I think now finally, like, people are opening their eyes and like, wait a minute, this is actually a real problem.
We meet with a lot of government officials, and they're completely unaware of these bottlenecks.
And it's funny, if you point them out, there's still no reaction.
Yeah. But it's so acute and so urgent. I mean, you're in a perfect position to actually identify those bottlenecks, so it's really great that you said it on this podcast, because then we can take that material and say, look, he would know. This is what we need. This is going to be a crisis very quickly.
Yeah.
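The supply math behind that concern can be sketched in a few lines. This is only a back-of-the-envelope check using the rough, unverified figures quoted in the conversation (10 billion humanoids by 2040, roughly one GPU per robot, about 20 million GPUs produced per year), not real production data:

```python
# Back-of-the-envelope check of the GPU bottleneck discussed above.
# All inputs are the speakers' rough figures, not verified data.

target_robots = 10e9        # the "10 billion by 2040" figure
gpus_per_robot = 1          # "a full GPU"; the speakers note it could be 2
annual_gpu_supply = 20e6    # "we're only making 20 million GPUs a year"

gpus_needed = target_robots * gpus_per_robot
years_at_current_rate = gpus_needed / annual_gpu_supply

print(f"GPUs needed: {gpus_needed:.0f}")
print(f"Years of production at today's rate: {years_at_current_rate:.0f}")
```

At one GPU per robot, today's quoted output would take about 500 years to supply 10 billion robots, which is why the conversation keeps returning to fab capacity as the binding constraint.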
Thank you for the tour today. Thank you for the work that you're doing.
Super grateful. Excited to have you at the Abundance Summit
with your team of robots.
If you're interested, it's Abundance360.com, check it out.
Again, it's 1X Technologies to come and learn about the positions here and
follow you on X.
Great. Even easier, 1x.tech.
1x.tech.
And by the way, the reason you named the company OneX, I think that's worth closing on
as the story here.
Well, you know, there's all of these videos on YouTube.
Yeah.
Robots.
Yes.
And there's always like this 8X or 4X in the corner.
And all we do is real time, because we build proper robots.
There you got it.
What you're seeing is real 1X speed.
And we had fun today with Neo Gamma.
And it's also amazing because if you apply for our careers and you come here,
you get to be a 1X engineer.
Okay.
Well, a real pleasure, our friend.
Awesome.
The things I get to do because of this podcast.
We're having fun.
Oh, my God.
Yeah.
Awesome.
Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead.
I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more.
There's no fluff.
Only the most important stuff that matters, that impacts our lives, our companies, and our careers.
If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email.
And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.
Thank you.