Moonshots with Peter Diamandis - Brett Adcock: Humanoid Run on Neural Net, Autonomous Manufacturing, $50T Market #229
Episode Date: February 11, 2026. Peter & Dave sit down with Brett Adcock to discuss the future of Figure and Humanoid Robots. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Brett Adcock is the founder of Figure, an AI robotics company developing general-purpose humanoid robots. Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Dave Blundin is the founder & GP of Link Ventures. My companies: Apply to mine and Dave's new fund: https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy _ Connect with Brett: X Website: https://www.brettadcock.com/ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Listen to MOONSHOTS: Apple YouTube Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
I am blown away by how far you've come.
The things that you can do with neural nets now
just, like, completely blow my mind.
Every year to year, the whole business looks completely different.
It's amazing to me how you accumulate data
and the data becomes this incredible barrier to entry,
this incredible asset.
The one thing that's important here is that
once one robot learns how to do a task,
every robot in the fleet knows it.
Humans don't operate like this.
When do we start seeing robots building robots?
We will put robots on our BotQ lines this year.
Listen, this is like going to be the largest economy in the world.
It's going to be a super impactful business.
It'll lead to, like, ubiquitous goods and services for everybody in an age of abundance.
And it's going to be a super fun business, too.
It's like going to build a sci-fi future we all want.
What you're seeing is every major group in the world will get in this space.
You have to.
You have, like, no choice.
When are we going to see the first figure in the customer's home?
My best guess is, I think...
So Dave and I are in San Jose at Figure Headquarters.
We just did a podcast with our friend Brett Adcock, Extraordinary.
And check it out.
Check it out.
So I'll be here.
Yeah, Figure 1.
This is the original.
Yeah.
Still somewhat functional.
Yeah.
It ran the first large language model, the first neural net.
They built it in under a year.
Brett actually was screwing these things together himself.
And it was all about gathering telemetric data so they could build this.
Here's Figure 2, much more beautiful, much more functional, running neural nets across the board,
dumping all the C++.
Can you do, like, live long and prosper?
But I.
But I.
And here we go with Figure 3, the workhorse right now.
We just did a tour.
I mean, probably, you know, saw a hundred of these walking through the hallways, on test stands, cleaning dishes.
Brett thought it out and added a flexible toe, too, so it can bend down like this.
And before, it had just this clunky, clunky foot here.
And Figure 3 has the palm camera.
Palm camera?
Yeah.
They cut about 30 pounds off the weight and 90% of the cost.
Of the cost.
Wow.
Crazy.
Yeah.
Amazing.
Yeah.
It's the perfect height between the two of us.
Welcome to Moonshots, everybody.
I'm here at Figure headquarters with Brett Adcock and DB, too.
Brett, it's been, it's been about 18 months since we did a podcast on Moonshots together.
And I am blown away.
by how far you've come.
18 months and AI time.
That's like a decade.
Welcome to figure headquarters.
What do you think?
Yeah.
It's extraordinary.
I mean, just to describe, we just went on a tour.
You've got about 300,000 square feet,
400,000 square feet under development here.
I mean, there are figure three robots walking down the halls.
There's fully autonomous robots, I guess, running Helix 2.
You just released Helix 2 today.
Today.
I got it while I was flying up here.
We have these robots.
doing everything from kitchen tasks to packages to manufacturing of different types.
I mean, how many robots do you think we saw?
Seriously.
I wasn't counting, but hundreds, maybe not a thousand.
Yeah.
At least at least a hundred or so.
Yeah.
Well, there's a lot of partial robots out there too.
It's hard to.
Picking up figure heads.
How many hands do you think we saw?
There's many, many more hands than whole robots.
The hand line, the head line, the torso line.
Actually, picking up the head was the most surreal.
This is where the pelvis is made, yeah, for sure.
Yes.
Pretty amazing.
You know, I still remember during my first visit with you, you know, full disclosure,
my venture fund is invested in two of your earlier rounds, super proud of the progress
that you've made.
I still remember your Figure 1 putting a Keurig cup in a coffee maker.
And that was a big deal because it was done with neural nets and not C++.
And that honestly was like, I think it was a big inflection point for us.
I feel like, you know, there were a few things we needed to really run down. Can you build an electric humanoid
that's low cost and capable like a human, just on the hardware side of things?
The second thing is, can you figure out a way to not code your way out of this problem?
How do we use a neural net to learn those, like, human-type representations and new tasks?
And when we were doing the Keurig task, it was basically a bi-manual neural net running on the robot, which has now, like, evolved into Helix.
And it was able to basically do the whole task. It was a small task, a few minutes long: like, you know,
picking up the Keurig cup, like, opening the coffee maker, putting it in, running it.
And it was the first time we saw, like, a true instance of neural nets working on,
you know, a bimanual humanoid robot.
Yeah.
And that was when we were like, okay, we have to just go all in on neural nets.
The whole stack has to be neural nets to make this work.
And that started... that was basically two years ago now.
And then you guys saw Helix 2 today.
which is like the, basically like the best release we've ever had.
So we'll run a clip of Helix 2 while we're describing it because what we saw was figure
three running Helix 2 in full autonomy, going into the dishwasher, picking stuff up,
putting it away, not pre-programmed.
And I loved the human elements of it, like using its hip to close something and its foot
to raise the dishwasher.
That's the neural net difference, though.
You get, you get unexpected behavior, you know, both good and bad, but things you could never code up.
You could never.
Like, your career went: the software company, the VTOL company, and now this has got to be the first neural net platform.
Yeah.
The things that you can do with neural nets now just, like, completely blow my mind versus code.
Like, we could never have done a quarter of the stuff that you saw today, with the whole body, with manipulation, with code. There's only so far you can really push, like,
code and heuristics written by a human on a robot. It's just a dead end. Yeah. It's just not going to work.
Yeah. Yeah. Yeah. It's amazing to me how you accumulate data and the data becomes this incredible
barrier to entry, this incredible asset. If you were writing all this in C code, that C code would be,
you'd have millions, hundreds of millions of dollars invested. You would not want to mess it up.
With the neural net, you can say, look, hey, guys, retrain it from scratch. Yeah. Right off the bat.
It's just a completely different approach. And that's why people are way under predicting how
important or how quickly this is going to evolve. Because it's a completely different paradigm.
Well, we've, like, lived through it. I mean, you know, maybe a year or two ago we had, like, several hundred thousand lines of C++ code.
Several hundred thousand. Yeah. Probably a hundred bucks a line to write it.
Yeah. Very expensive. Very hard to, like, test and, like, get out reliably.
Yeah. And, like, also hard to model all the different behaviors that we would need to test. Yeah. And then, you know, we removed a majority of all that in Helix 1, where we still had a lot of, like, lower-body control
being run in basically the control stack in C++.
And then today we removed the remaining 109,000 lines of C++.
All neural nets today.
That's the full body.
And that took it from being able to do really good tabletop manipulation.
Like you saw the Keurig coffee, the work we do with logistics: all that was done in neural nets.
We've been showing like amazing progress there.
But getting the whole body to move dynamically through a scene while manipulating and planning is just a whole other thing. We basically spent, like, the greater part of a year refactoring the Helix architecture to enable us to do this work.
You're talking about now, like, moving through space like a human.
Yeah, having, like, control of the full body, like, all eye-hand-foot-leg coordination. Everything is sensor data in: cameras, tactile, we have palm cameras. Basically doing inference on board the robot, fully embedded, and then being able to output torques into the motors, and doing that at, you know, a few hundred hertz, in terms of, like, the planning and control, and doing that reliably on very difficult tasks.
Like, these are bi-manual tasks where it's grabbing and holding things, planning, moving the body, getting things out of the way, making, like, errors and replanning and fixing them.
All done with the neural net now, end to end, over, like, a pretty long horizon. For us, it's, like, you know, kind of like room-scale autonomy. So we can, like, now finish the whole room,
which is important, and next we're going to graduate to the full house.
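To make the loop being described here concrete, here is a minimal Python sketch of a closed-loop, onboard control cycle: sensors in, learned-policy inference, torques out at a few hundred hertz. All names (Policy, read_sensors, apply_torques) and the 40-actuator count are illustrative stand-ins, not Figure's actual API.

```python
import time

CONTROL_HZ = 200            # "a few hundred hertz"
DT = 1.0 / CONTROL_HZ

class Policy:
    """Stand-in for an end-to-end neural net mapping observations to torques."""
    def __call__(self, obs: dict) -> list:
        # Real system: embedded inference on an onboard accelerator.
        return [0.0] * 40   # one torque command per actuator (~40 DoF)

def read_sensors() -> dict:
    # Head cameras, palm cameras, fingertip tactile arrays, joint encoders.
    return {"rgb": None, "palm_rgb": None, "tactile": None, "joints": None}

def apply_torques(torques: list) -> None:
    pass                    # write commands out to the motor controllers

policy = Policy()
for _ in range(CONTROL_HZ * 5):                    # run the loop for ~5 seconds
    t0 = time.monotonic()
    obs = read_sensors()                           # re-observe the scene every tick
    apply_torques(policy(obs))                     # closed loop: act on what it sees now
    time.sleep(max(0.0, DT - (time.monotonic() - t0)))
```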
That's one of the things that's really obvious when you're walking around, looking at what everybody's doing and working on.
You'd visualize a robot company having lots of people working on microcode or actuators or batteries or whatever,
but there's just a huge number of people out there at workstations.
They must be working on the neural nets.
It's just got to be such a dominant part of what makes the thing actually look and feel human.
And, you know, the motions are so smooth.
And, you know, everybody, when they think about the history of robotics, they kind of chart these line charts.
But it's not like that. It's a disruptive change from dropping that last 100,000, 105,000 lines of C code and moving to an all self-organizing neural approach.
Yeah.
Completely different future.
It is like we make these like technology, like progress steps.
And I think it's been very apparent here.
Like every year to year, the whole business looks completely different.
Yeah.
In large part it's trying to get the hardware, hands, like, all this stuff, in a good spot.
And then, you know, be able to basically have, like, more range of motion and speed and torque, like a human.
And then be able to get, like, you know, we're all in on neural nets.
So it's like, you know, what is the right data set for that for pre-training and post-training?
Do we have the right, you know, training cluster?
Do we have the right models?
And then deploying those really well on the same humanoid hardware.
That's, like, a full loop.
And we've actually designed Figure 3 so that, if you ask, like, what was the guiding principle of Figure 3 more than anything else,
it was just designing for Helix.
How do you design this to run Helix on it? It's like, how do we give Helix a body?
So counterintuitive.
Everything. Just the feet, hands, head.
So we built around the neural net.
We built the neural net and we said, how do we like fit like this into a humanoid robot
and what are the best sensors?
How should it run?
What does operating system look like?
Middleware, firmware, embedded software, like all of it is encapsulated in this
view that we need to go all in on neural nets and do human-like work.
I'm about to release the 2026 version of my Humanoids Metatrend report.
It's a deep dive looking at 100 different robots in development right now, with a deeper dive into 10 of them, including Figure.
150 pages. You can check it out on Substack for my paid subscribers. Anyway, super pumped. This is a field
that's moving at exponential, hyper-exponential speeds. So in the beginning, you had partnered with OpenAI on software, and then you made a departure from OpenAI. And, I mean, I guess none of the reports were quite accurate, but, like, okay, well, you can, you know, correct it.
I met Sam and the OpenAI team. They were just really interested in getting into robotics. Yeah.
And it was, like, in their early, you know, master plan to get into, like, basically shipping home robots.
And they, you know, really wanted to kind of, like, you know, basically work on a very intimate, like, relationship.
They ended up co-leading our Series B along with Microsoft. And we started working on basically, like, a collaborative
agreement to help, like, work on next-generation models for humanoids.
And, you know, we were, like, super big then, and we still are, on: how do we, like, language-condition the whole stack? How do we use, like, an LLM? In a lot of ways it's just, like, this world model. It really understands, like, in the weights, basically, like, what things are, what it should do. It has a lot of, like, good semantic understanding.
Yeah.
We're trying to figure out: how do we tap that for the humanoid? How do we, like, learn from this?
Yeah.
And scale in some of those representations.
And it just, like... the partnership just didn't work. Our team just ran circles around them. Yeah. For basically the better part of a year. And it just got to a point where it just made sense to split; we were just doing all the work ourselves internally. We had a whole team here, a lot from, like, some of the best labs in the world. And we were putting out, like, work after work; the Keurig coffee stuff was done by us. All this stuff was done internally. Yeah. And at some point it just didn't make sense to train other folks on how we basically built AI models internally for embedded systems like a humanoid.
Did it turn out that an LLM matters at all in the physical world? Like, could you start with an open-source LLM and do as much?
Which is, like, a VLA, a vision-language-action model, to your point?
Yeah. Basically, like, I think the LLM is definitely a central piece of this. Like, we basically want, like, the semantic grounding from, like, a VLM.
Yeah, like the common sense.
Yeah. Like, what we, like, understand from this. So, you know, which we have in Helix today: super critical.
But, like, getting to a point where we can understand physics in the robot, and have it, like, really be able to plan and reason at fast dynamic speeds, was something that nobody in the world had ever really done before. And I think that's the work that we, I think, have been excelling at, and love more. It's just, like: how do we get it to understand physics?
I think most of our audience probably knows this, but just to rewind the tape: the LLMs, GPT-2, GPT-3, were
built entirely on text data scraped right off the internet.
And then they supplemented that with a ton of other data, also in text form.
And that creates this machine that has tremendous amounts of common sense.
And if you ask it, hey, do you know how to play soccer?
It says, yeah, of course I do.
But then you try and install it in an actual physical moving machine, and it has no idea what it's actually doing.
Yeah, I mean, like, we need to, like, touch everything in the world.
Yeah.
And we have this, like, really high-dimensional robot that has, like, you know, 40-plus degrees of freedom. So, like, you know, on the surface, just the math around this: the dimensionality is basically really high. So you have, like, 40 motors, and they all can spin 360 degrees.
Yeah.
So the amount of states the robot can be in, like, positions, is, like, 360 to the power of 40. So there are more states of the humanoid than atoms in the universe.
That's a lot.
So you're not going to simulate those one by one.
Yeah, exactly.
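A quick back-of-envelope check of that claim, assuming each of the 40 joints is discretized to one position per degree of rotation:

$$360^{40} = 10^{40 \log_{10} 360} \approx 10^{102}$$

which comfortably exceeds the roughly $10^{80}$ atoms estimated in the observable universe, so enumerating or simulating configurations one by one is hopeless; the robot has to learn generalizable representations instead.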
So the question is, like: it needs to, like, understand these fine contact dynamics. Like, I need to grab this water bottle. Where do I position my elbow, pelvis, like, torso, head, fingertips? How do I plan, you know, to grab this? And then how do I put pressure on there? It has to understand those representations really well, you know, going from observations into the actions it's doing at test time.
And this is not in the LLM.
Yeah.
The LLM knows none of this.
Yeah.
The LLM knows this is a water bottle. And it probably knows, like, I need to grab it from the side. But, like, all this implied physics that we need to do here, we just have to go train models to go do that.
It's actually kind of weird because it thinks it knows how to do it, too.
You know, the LLMs feel like they can do things.
you know, intuitively, and then they completely fail.
I mean, you can... we've done this.
You can zero-shot the LLMs inside a robot.
We do it.
We still do it actively.
They just can't do anything.
Just for fun, just to watch them fall.
Yeah, it's, you know, kind of interesting. Like, the other day I was like, can I just zero-shot this? Like, I'm working on this new AI lab that I founded recently called Hark. And we have this new AI model there that is just, like, completely incredible.
Wait, wait, you founded a new AI lab? Well, rewind the tape here. What?
Yeah. I started a new AI lab.
I sent you this.
Did you?
I did.
Yeah.
I'll send you again.
And we have some new AI models, and we actually put one of them into the Figure robot, like, this month. And I was like, okay, let's just zero-shot it. Let's give the LLM, you know, let's give the model... this is, like, a multimodal model. Let's give this access to just, like, basic commands, like basic X, Y. Like, basically, can we give it, like, acceleration and X, Y coordinates for navigation?
Like, basically a joystick.
Like a digital joystick.
Yeah.
And I asked it to, like, find the exit sign and just, like, get itself out of the building.
And, unfortunately, it was going in the right direction and ran into, like, a clear glass wall.
Well, kids do that too.
Yeah.
Exactly.
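For flavor, here is a hedged Python sketch of the experiment described above: wiring a multimodal model to a "digital joystick" of X, Y accelerations and nothing else. The function and prompt are hypothetical stand-ins; no real Hark or Figure API is implied.

```python
import json

def query_multimodal_model(image_bytes: bytes, prompt: str) -> str:
    """Stand-in for any vision-language model endpoint; returns JSON text."""
    return '{"ax": 0.0, "ay": 0.3}'

PROMPT = (
    "You control a robot via accelerations in its X/Y plane. "
    "Find the exit sign and leave the building. "
    'Reply only with JSON: {"ax": <m/s^2>, "ay": <m/s^2>}.'
)

def control_step(camera_frame: bytes) -> tuple:
    reply = json.loads(query_multimodal_model(camera_frame, PROMPT))
    clamp = lambda v: max(-0.5, min(0.5, float(v)))   # cap commanded acceleration
    return clamp(reply["ax"]), clamp(reply["ay"])

# The failure mode in the story: a clear glass wall is nearly invisible in RGB,
# so the model keeps commanding forward acceleration "in the right direction."
```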
So we've, like, really stressed this. It just doesn't work. Like, you're missing so much, like, world understanding of, like, what's really happening. How do I move my body? Like, we're thinking, like, you know, it's pretty simple to grab an object, maybe, with a stationary robot. The robots we have for humanoids are moving. The pelvis and head and torso and hands and arms... like, when you're reaching out to grab something over a table, your pelvis is moving backwards. Like, it's very, very difficult to command, like, a very high-degree-of-freedom robot.
Robotic physiology. You know, out of China this year, some government officials said we've got a robot bubble. I don't know if you saw that article that came out. There are 150-plus robot companies in China. And I mean, there's a lot going on there. Yeah. You know, in the U.S., I would say maybe there's 10 serious players, I mean, two or three who are extremely serious, including Figure. But there are a lot of would-be humanoid robot companies. I was just at CES and saw,
you know, I mean, it was a humanoid explosion. Yeah. And then as many or more hand companies,
which is interesting. So I go back to sort of the early 1900s when there were like 250 car
companies and like two or three hundred tire companies. And then this massive consolidation occurs.
And GM and Chrysler and Ford sort of buy and consolidate. What do you think is going to happen
with all the robot companies today? I think it happens in every industry like this,
especially in deep tech. This will all consolidate down to a few groups globally.
Do you have a guess? Is it a triopoly? Is that the right description? Is it, you know, more than 10, less than 10?
Far less than 10.
Far less than 10.
Yeah.
Globally.
Globally.
It always seems in the U.S. anyway to settle down to two, three, or four.
But the borders are not obvious.
Like with cars... cars are cars, right?
And actually, you had cars and trucks, and those were kind of separate for a while.
Well, you also have different designs.
Like, I want the plush interior.
Right.
I want the sportster.
I mean, and that, I wonder, is it going to be, are robots going to be differentiated
by their vertical application?
their personality.
There's so much more variety possible in robotics.
Yeah, I think everybody's just like taking for granted how difficult this is.
Yeah.
Like this is like you have to go out and build like basically like pretty novel,
very, very difficult hardware.
Yeah.
It needs to be relatively cheap.
Then you've got to figure out how to make neural nets work on it.
And then you've got to make neural nets work at scale.
And then you've got to manufacture at scale.
And then you're going to get these products out reliably that all work every day
without any human intervention.
Yeah.
And, you know, I think we talked a lot about this, like, how we were doing, like, the Keurig coffee work.
Like, I haven't seen a single humanoid in the world do that, or able to do that, today, globally, and that's been two years.
Yeah.
I mean, by the way, a lot of the video we see is actually teleoperations.
I think, I wonder if people will realize that.
A lot of the robot companies are teleoperated versus fully autonomous.
So what we saw just walking around here was a four-minute-long, fully autonomous operation on Helix 2, right?
I've never, like, you know, I've built a lot of businesses in my day.
I've never seen so many companies with a human in the back commanding the robot and putting out updates in my life.
I've just never seen it.
I've like, you know, when I started first started to figure, stuff was coming out.
But now it's like every week is somebody just teleoperating a robot and putting on a video.
And it's just, it'd be the equivalent of like I have a self-driving car company and there's a guy in Tennessee driving it.
And we're like marketing as like there's no humans in it and self-driving.
We're putting out teasers.
And in a lot of cases now, those companies are selling the service. So I think, I mean, if you want to do this right, you've got to believe in neural nets all the way down the stack. You've got to basically build for general-purposeness.
So the parameters that are going to define the successful top two, three, four: the neural nets, manufacturing?
Okay, I would say what's impressive today is not manufacturing. You know, we're pushing on manufacturing hard, but you can probably solve general robotics with 100 robots.
What's impressive is, like, a full end-to-end robot that is generalizing to an unseen place, like, you can drop it into an Airbnb and it's able to do long-horizon work with neural nets.
Any long horizon work in unseen places.
What do you define as long horizon?
Hours, days?
I would like to see days of work.
Yeah.
Full autonomous days of work.
And at least, at the very least.
And we're like so far from that.
You have like robots out there doing like karate and jumping, which is like,
these are like pre-programmed open loop behaviors.
They're not impressive.
We do that.
We've done that stuff here.
Like, you know what I mean?
We've done like the like the open loop behaviors.
It's just there's just like, you know, any college kid in the dorm room can do this with a,
with a robot.
And yeah.
So I think, like, that plus teleoperation... teleoperation is not impressive. You could build shitty hardware and still teleoperate it and put out videos. That is not hard. What's hard is to do full end-to-end neural networks in unseen places, or generalize to this. And then if you can solve that, then the next step is, like, how do you get that at scale? But we are still in the, like, who-can-solve-general-robotics phase of the humanoid race. And it's just not impressive if I can build 100,000 robots right now that, like, need teleoperation or can just only do open-loop replay. It's just, like, not cool. Like, if we said your only job is to build 100,000 robots right now...
Yeah.
We have the capital to do it and we can do it.
But like what we really want to solve is like I can I can give you 10 robots and they can go into insane places and do real useful work.
Like that's what's going to differentiate.
So iterate that until it's right and then mass produce.
Yeah.
You basically want to bring up mass production in parallel, because, like, building, like, high-rate manufacturing for a humanoid is going to be super hard. And you're going to, like, have to go through, like, a lot of iterative design process. So that's what we're doing now. We're bringing up higher-volume manufacturing as we're, like, learning how to build in true general-purposeness.
But my view is, like, if you think about these, like, level bosses that you need to graduate through: you need to graduate to doing, like, you know, first, very short periods of, like, neural networks, which we haven't seen a lot of in the world today. I don't think there's anything over a minute long in the world that's doing neural nets continuously today.
In humanoid research?
That's amazing.
Everything's cut.
All the films are cut or teleoperated.
Yeah, yeah.
It's pretty crazy.
I'm really glad you're telling us that.
Yeah.
I mean, like, you watch any video.
You want to see it uncut.
You want to see it done with neural nets, like, not teleoperated. And then you want to see stuff like we showed you here in person today, running for hours and hours. And, just like, we run these robots with neural nets.
I mean, the kung fu videos, whether they're teleoperated or fully autonomous, are actually
fascinating and scary when you see them doing that.
But the technology around there is not great.
I mean, you're basically putting somebody in a mocap suit.
You're having some guy like do karate chops or walking around.
Right.
And then you're running that open loop.
I mean, you're running that blind.
You're just hitting a replay button.
Right.
And you can do that with a very simple, like, RL neural net. Like, you can basically do DeepMimic on this. And it's super simple. Like, there's open-source code for this. You can do it with, like, basically one GPU on your desktop. And you can do it with any robot, and every robot has a very tiny amount of compute. These are, like, single-million-parameter models; they're very small. You don't need a lot of memory, and they're very simple to execute.
What you really want is good closed-loop control, where it's reasoning at, like, 200 hertz, 200 times a second.
Sure.
And it's dynamically responding to the scene.
Yeah.
And that is literally a million times, 100,000 times, harder than doing open loop.
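The distinction being drawn here can be sketched in a few lines of Python: open-loop replay never consults a sensor, while closed-loop control re-reads the world a couple hundred times a second. Function names are illustrative stand-ins only.

```python
import time

def replay_open_loop(trajectory, send_torques):
    """Mocap-style replay: no sensors consulted, just the 'replay button'."""
    for torques in trajectory:      # e.g. a karate chop recorded in a mocap suit
        send_torques(torques)

def run_closed_loop(policy, read_sensors, send_torques, hz=200, seconds=5.0):
    """Closed loop: observation -> policy -> torques, ~200 times per second."""
    dt = 1.0 / hz
    for _ in range(int(hz * seconds)):
        t0 = time.monotonic()
        obs = read_sensors()        # the scene may have changed since last tick
        send_torques(policy(obs))   # the policy must react to what it sees now
        time.sleep(max(0.0, dt - (time.monotonic() - t0)))
```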
What about the human sort of cycle time? In hertz?
Oh, God. Much lower than that.
Yeah.
I would imagine.
One thing we've seen with the robot is it can, like, balance on one leg, like, better than a human.
Yeah.
We just have, like, much better, faster dynamics.
Can we talk about the speed of development here? So, 2025. I'm just trying to imagine. And you put out this beautiful, you know, post every week on X about the progress of the robotics field and what's going on here at Figure, and it's just constant. You know, locomotion was a big step forward, excuse the pun, for Figure: just seeing it walk and then run very naturally. What else was significant in 2025 for you?
I mean, we launched Helix in 2025, about this time last year, about a year now. I think it was, like, highly significant. Like, we basically figured out how to run neural networks on a robot over long periods of time.
How do we get the data for it?
How do we train models?
How do we deploy to test time?
How do we get to, like... you watched the package logistics?
I think you guys saw it.
Yeah.
It's like it's been running for days now.
And it's just like it's a neural net all the way down the stack.
It's learning how to grab packages, kind of like, you know, individualize them, find the
barcode, position it down.
It'll even pat the package down so the barcode reader below can see it and scan it.
And it's doing that at a very high, like, accuracy, and it's doing that at high speed.
High speed, though. That's the part that's...
Yeah, no, it's visually...
Well, because a lot of what you see in robotics isn't as fast as a human would do it.
Yeah.
We had, like, our last run... like, the last one we did, we had one over 67 hours.
It's continuous.
Over 67 hours.
This thing is...
Crazy.
Yeah, it's doing an operation every second or two.
So 67 consecutive hours of that is a lot: at an operation every second or two, that's somewhere between roughly 120,000 and 240,000 consecutive operations.
So if you had to guess at...
So I would say Helix, like, is a big one. And then Figure 3.
Figure 3 is a huge step change for us in hardware.
If I could, what do you see then going into 2026 here?
We got the next 11 and a half months.
Yeah.
What are you excited about?
We will build, like, our entire roadmap around Helix 2 now. Basically, Helix 2 can, like, go from doing the logistics use case stationary to walking and moving, basically doing, like, long-horizon, full-body control. So that means the robot...
And then we basically have now integrated all the sensors, tactile, palm cameras, into the stack. And we're seeing, like, improvements overall in the policy layer. So we're getting better and faster at, like, basically taking data and basically running it on board the robot now.
So I wanted to ask you, like, what defines Helix 2? Because you're probably incrementally improving the neural net every day.
Yeah. So, a couple big steps.
One is we basically have integrated a fully learned, what we call, like, System 0, controller into the robot. So the robot has a full-body reinforcement-learned controller in it.
Okay.
So basically now we have, like, literally no code written on that robot. So it can, like, basically move the whole body itself using a fully learned controller inside of Helix. We call it S0.
Has anyone else ever done that before? That's got to be...
There are reinforcement-learned controllers out there. Like, a lot of the karate stuff you see, and things like that, are that. But nobody's integrated that into the whole body with learned manipulation and perception. I mean, nobody's shown that actually working while moving around and doing the things that we showed today.
Yeah.
I actually don't even know if anybody has shown it stationary, standing and doing learned policies. Actually, probably not in the world. So, like, that's integrated into a stack now that we actually use going forward.
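Figure hasn't published Helix's internals, but the shape being gestured at here, a slower perception-and-planning network commanding a fast, fully learned whole-body controller, can be sketched generically as a two-rate hierarchy. Every name below (SlowPolicy, System0, the latent sizes and rates) is a hypothetical stand-in, not the actual architecture.

```python
class SlowPolicy:
    """Vision-language level: semantics and planning at tens of hertz."""
    def plan(self, obs) -> list:
        return [0.0] * 64            # latent command for the low-level controller

class System0:
    """Whole-body RL-trained controller: torques at hundreds of hertz, no C++."""
    def act(self, obs, command: list) -> list:
        return [0.0] * 40            # joint torques for ~40 DoF

def tick(step: int, obs, slow: SlowPolicy, s0: System0, cache: dict) -> list:
    if step % 20 == 0:               # e.g. a 200 Hz loop that replans at 10 Hz
        cache["cmd"] = slow.plan(obs)
    return s0.act(obs, cache["cmd"]) # the fast loop runs every single tick

# Usage: torques = tick(0, obs=None, slow=SlowPolicy(), s0=System0(), cache={})
```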
I think one of the things we learned: like, we were in BMW last year, and we were there for, like, six months; we deployed our Figure 2 robots every single day.
Yeah.
The biggest thing we learned there is, like, with the stack we had, I think about 80% of the things we got right and 20% of the things we got wrong. Meaning, like, the things that we got wrong, we didn't want to scale. It was working. The robot ran every single workday, and it worked. But we learned, like, okay, I don't want to ship 100,000 robots on this, like, architecture stack. It's just, like, too hard to scale. It'd be, like, too brute force.
Yep.
And so we basically worked, for almost basically a year now, on, like: okay, what is the ideal architecture where we can go out and accumulate large sets of pre-training data?
Yep.
Put it in the robot, and it can just, like, do this work. And generalization emerges from this.
And that's what you're seeing today.
So Helix 1 had the C code in it still. So what defines Helix 2 is...
Helix 1 had a lower-body controller that was still written in C++. And everything else, the whole upper body, was full neural nets.
Okay.
And so we basically completed now the full body.
Okay.
And in doing so, we also did some work at our System 1 level, where we integrated all the sensor modalities now, from the hands and the rest of the robot, into the stack. So, like, for example, we now have tactile sensors in every fingertip that we're using on Figure 3, as well as palm cameras, to understand how we're, like... we're sometimes occluded, and sometimes we want to basically better understand how we're grasping items. So we put out a bunch of stuff where we're picking pills and stuff out of pill cartridges, where you're, like, literally occluded from the head camera, your hand is right in front of it, but we still really want to understand where we're going.
So now, with Helix 2, we basically have a full stack, end to end, with neural nets. And we feel confident scaling the pre-training dataset into Helix 2. I'd even go as far as: we've designed Helix 2 for the pre-training dataset, and then we designed the robot for Helix 2. So we've, like, designed everything around data.
Yeah.
And how do we get data at scale?
If you're in the neural net game, it's like a data play.
It's, like, how high-quality and diverse is it?
So it's experience.
It's just gathered in the field in all kinds of circumstances.
It's weird in that.
I know everybody knows this already, but it's accumulating and it never goes away.
It's incredible.
Unique data.
Well, yeah, you teach somebody how to scuba dive or how to play piano.
And they have that knowledge.
They live.
Then they die.
Then you have to teach somebody else.
This is completely accumulating.
The reason why I think it will be, like, very few humanoid groups: the one thing that's important here is that once one robot learns how to do a task,
every robot in the fleet knows it.
Humans don't operate like this.
I wish we did.
I watch my kids, like, learn how to do stuff, and they just don't listen.
Robots do.
I wish we...
I wish we did.
So 2026 predictions.
What's your boldest prediction for Figure?
What are your goals for this year?
What do you imagine?
Yeah.
I mean, we're basically spinning up BotQ production enormously right now for Figure 3.
So you said something like a robot every 30 minutes you expect.
We're trying to get there in the near term right now.
Amazing.
Which you guys saw; we walked through BotQ today.
What did you guys think of it?
Yeah, it's a lot of humans.
I wish everyone could see it.
I guess it's all secret.
You can't camera through there.
We haven't.
There's a lot of IP there because like you see exposed boards and Xwears and stuff.
People was.
It's cool, right?
If we get some from this video, maybe we can mix it in here.
But so whenever, there's a lot of humans.
When do we start seeing robots building robots?
We will put robots on our BotQ lines this year.
And then phasing, like, basically phasing humans out of there will be a combination of getting more robots there and doing more high-volume, like, automation over at BotQ.
Okay. So that's the first 2026 objective.
We want to scale up robots at BotQ, for sure.
The second thing is we want to scale out robots in the industrial commercial workforce.
So we have, like, multiple clients now we've signed. They are, like, buying or leasing robots from us, and we're going to get those out at scale in 2026.
We know exactly where we're going.
Geography-wise, what the use cases are going to be, deployment schedules.
We want those to be Figure 3s.
So we just retired the Figure 2s at the end of last year, and now we're basically building out the arsenal of Figure 3s.
You know, that's where we're scaling up manufacturing to get them out to the world and run every day.
We like the commercial workforce because it really helps harden our ability to run robots every day.
What we're here to do is like, we're here to build robots and run them in the world.
And they run 24/7.
Your ideal customers, who?
I know a lot of people who would love them.
We have, to be frank, like, so much demand from customers.
We've talked to 50, 100 customers or so in the last, like, six to 12 months.
We really want to be, like, kind of all in with a smaller group of customers and really spend time with them, integrate well into their facilities, and, you know, do well.
We're still at this, like, we're still early, right?
We don't have, like, thousands of robots at these places yet.
We want to get there as fast as we possibly can.
But, like, once we get to a certain point... like, I mean, I think we could ship an enormous amount of robots into the current customers we have now. So, like, you know, we're kind of good now for the next, like, two or three years in terms of demand. Like, they're kind of waiting for us to, like, ship at scale.
Leasing versus sale. Which is it?
Yeah.
We have like a, you know, we really like the leasing model.
Humans are leased.
Yeah. So, you know.
It's better that way.
Yeah. At least these days.
They used to be... we lease these days.
Yeah, at least humans.
So we, like, we lease humanoids today.
You know, we're not opposed to selling. I think what really matters is trying to figure out how to find the right distribution and get robots out at scale.
Like, it'll really help us get really good at what we do.
Like, it's one thing to, like, show a demo or whatever else. But, like, you know, when we had robots at a commercial customer last year, at BMW, it taught us a ton about, like, running it every day: fleet operations, safety, like, repair and maintenance. Like, there's a lot of other things that need to come through on the ecosystem that we need to get right.
So I'd say the second thing is, like, getting robots out at scale to commercial customers. And then the last thing, which is arguably the most important for us, is we want to solve general robotics.
Yeah.
We want to basically... like, the analogy is, like, we want to build a human in a body suit that you can just talk to, that has, like, common-sense reasoning, that you can communicate with, that has, like, basically almost, you know, almost perfect memory of what's really happening, of what's going on in your life. That can maybe talk to you, almost be your companion. I mean, and then go off and do things that, like, an everyday human would want to do.
And I would expect them to get up to speed on those tasks at or faster than a human can.
Are there two different models driving it? The VLM model for the body and the physics and the embodiment, versus an LLM for conversation and memory?
We believe this all comes down to, like, one model at the end of the day. That is one omni model that is trained early in pre-training that helps fuse all this together. But yeah, you could think of it like: we need to have speech. We need to have, like, language-conditioned policies. We need to understand physics really well. We need to remember things and be able to recall them easily. We need to have some sort of personality on the robot.
I think one thing that you're going to see more and more is we really want to make this robot something you can spend time with. And we've been really focused on getting the core building blocks built. But, like, over the next year or two, I think you'll see us...
I think I just want a robot in my home I can talk to.
Sure.
That can remember things. My kids come home, like, sad from school or something. I want the robot to understand that, to have the EQ, like, the self-awareness, to see that and talk to them.
Like, I think all this is, like, something we want to spend more... we're spending more time on now internally.
Is there already a big MoE model, where, like, depending on the task you're doing, it'll run different parts of the neural net? Or does it...
We have, like, one neural net now. There are no, like, libraries of neural nets that we pull down.
That's interesting.
So there's no, like, dishes neural net or, like, logistics neural net that you saw here.
Yeah.
Because, you know, at scale, like, if you teach the thing every physical motion, there's a massive number of combinations.
The storage is actually dirt cheap.
Yeah.
But the processing is very expensive.
Yeah, even better: we've basically seen positive transfer now with all this data coming in. The robot can generalize better with more information.
Yeah, it's weird. Like, more knowledge is better. It does cross over, like, playing piano makes you a slightly better soccer player. But also, you don't want to run the whole parameter set for piano playing when you're playing soccer. It's an interesting little hybrid problem.
You don't want to nuke it. Yeah, for sure.
Yeah, I mean, that's why we try to build best-in-the-world models here and build a great team that can ultimately deploy robots that are useful. I think showing, like, you know, this type of usefulness, like a lot of the stuff you saw today, in a diversity of tasks, is super important for a humanoid robot. It needs to be able to do everything a human can. And, you know, the distribution curve: it's, like, you know, we probably do, like, billions or trillions of very unique things in the world.
One of the things you said on our tour that totally tells me you're on the right track is that you're using normal GPUs for the training like everybody.
But the inference time compute is on super, super fast, dedicated non-H-100, non-GB300 hardware, which has got to be, you know, at least a factor of 10 or 100 cheaper and faster.
Yeah, it's also running fully on board.
And it's running fully on board.
So we can basically, like, do very fast inference and policy deployment, yeah.
And it's also not sucking down the entire power of the robot.
Yeah.
Yeah, I mean, you also have an issue where, like, you know, we've also run models off board the robot. But if we lose communications or have some...
So I wanted to go there.
You know what I mean? Like, if you, like, lose internet, it's, like, hard to do work.
And it's like most humans...
We should hit on supply chain, batteries, and comms.
So on the comm side, do you imagine we're going to be running like a 6G network on there besides Wi-Fi?
What's going on on batteries these days?
Yeah.
We have, so, from a network perspective, you know, communications back to the robot: we have Wi-Fi on board. We have 5G and a SIM card, an eSIM, on board. So you can text the robot.
Yeah.
It can have a network outside of, like, a Wi-Fi connection.
And then we also have Bluetooth on board.
So almost like a walking phone or something like that you would think of.
So we want the thing... you really want, like, a connection at all times. I mean, ideally you want a connection all the time, but you also want the robot to be able to perform work without a connection. So you really want a lot of onboard intelligence, so that, in case you lose internet, the robot's, like, not bricked. I mean, humans, for the most part, can do work without their cell phone.
Not teenagers. Yeah, that's going away.
Yeah. Okay. So batteries, I mean, they've been improving. What's battery life right now? I love the charging mechanism, by the way. For those of you who don't know, it charges basically through its feet.
Through the feet. Yeah, no connector. You just stand...
Yeah.
It just stays in place.
Yeah, that's great.
It's really cool.
What kind of battery life are you getting?
What do you expect in two, three years to get for battery life?
So today it's what?
Yeah, we run basically around, like, four to five hours per full charge of the battery.
If you're starting at full battery life.
And that's through full depth of discharge. And then we can, like, charge wirelessly, about two kilowatts, through the feet, inductively. And we have about a two-kilowatt-hour battery pack, so it's about an hour or so for a full charge on the robot.
So we can do, like, four or five hours on, an hour off.
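Those numbers are internally consistent; working the arithmetic from what's stated:

$$t_{\text{charge}} = \frac{2\ \text{kWh}}{2\ \text{kW}} = 1\ \text{h}, \qquad P_{\text{avg}} = \frac{2\ \text{kWh}}{4\text{ to }5\ \text{h}} \approx 400\text{ to }500\ \text{W}$$

The implied average draw of roughly 400 to 500 watts is our inference from the stated pack size and runtime, not a number quoted directly in the conversation.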
That's great.
Yeah, it's great.
I think, like, I think folks are over-indexing too much on how long the robot can run on a single charge.
Yeah, I don't expect.
I don't expect that many tasks.
Humans take, like, a few hours in.
You're not, like, you know, go take a little break, like, do this stuff.
So it's like, I think there's, you know, ample time to do.
opportunity to charging, maybe send another robot in.
We also can charge.
We basically can put like this little thin mat anywhere in the world.
Like it could be like a conveyor system or wherever else.
It could be at home and from the kitchen.
You can just charge there while doing work, which is really cool.
So you don't have to have any wires or things like that.
You're pulling from the wall.
I think one of the greatest value adds you're doing right now: people are over-indexing on all kinds of weird things because they're, you know, physical beings and they're watching the robot do physical things.
And they're saying, oh, my God, can you believe it can sprint now?
Oh, my God, it can do a backflip now.
Oh, my God...
And you're like, well, it depends whether you programmed that in C, or you teleoperated it.
Or did it actually learn this?
Yeah, I think most of those are open loop.
They're just like replay buttons.
Yeah, exactly.
And it's just so hard.
So when people say, well, how long does it run with one charge on the battery?
You're kind of relating it to your cell phone.
Yeah.
But it's not relevant to the inflection point we're heading into.
Yeah, I think the summary here is just, like: I need to see, like, real closed-loop control of a robot moving around, touching and moving things, like a human would.
Yeah.
And that's where the hardest problems all sit. And that's where we've seen this huge wave of, like, humanoid explosion, like you said, out of China and things like this.
Yeah.
But we've seen this, like, very steep drop-off in getting to that next point, which is even, like: show me a minute of the robot doing the Keurig or something like that, uncut, closed loop.
Yeah.
Real time.
Yeah, and you just haven't seen that. And I think you will. And I think there are, like, a lot more levels to go from there. And, you know, it took us two years to go from, like, a few minutes of tabletop manipulation with neural nets...
Yeah.
...to a point where we can do, like, kitchen work. Like, you know, room-scale autonomy. And that was two years of working seven days a week. We were here, like, a lot of nights getting there. So it just gives you a little sense: you're not going to do that in six months from here. So I think, like, there was a lot of both hardware, low-level software, firmware, embedded systems, sensors, and then, like, neural nets, and then data; all of that came together to build this. Like, we couldn't do the same work today on a robot that you could go buy today.
You vertically integrated.
I made the choice to vertically integrate.
But supply chain: how much of the supply chain ties back to China?
I think, like, by next summer we'll have almost none of our supply chain in China anymore.
And do you buy into the US versus China sort of AI and robot competition?
How do you think about that?
I don't.
I just, like... I spend a decent amount of time in China. I love it. China, it's great.
Like, I go there, and... you're watching TV here in the U.S., and it's just, like, this massive conflict and battle and everything. And then you go to China, and everybody's just, like, trying to help and win and trying to work and collaborate, and it feels like a startup incubator. And it's just, in a way, one competition. It just feels like everybody's Team Human.
Yeah. Team Humanity, out to go win.
And it's so great. When you come back here, you're, like, poisoned with all this, like, stuff online and, like, articles and television. And it's just, like... it's not like that when you're, like, boots on the ground doing this. It's, like: let's go as one and go win. And I just, like, love that spirit of, like, trying to progress this technology as a giant lever arm for humanity, to bring abundance, basically, for everybody, and just make it, like, the sci-fi future we all want to live in.
It's like, oh my God. It is. That is one speed run. Star Trek is what we talk about.
Exactly. It's like... yeah.
Yeah, I see Figure on the moon, Figure in orbit.
Yeah.
On the ocean floor.
100%.
So, the equivalent, like... you guys make your own actuators, motors here, and part of that is because you want the exponential-growth effect, but part of that also is the supply chain just doesn't exist to give you the parts here.
Yeah. In China.
I mean, you're talking about, and I wanted to talk about this, the improvements made between Figure 2 and Figure 3.
Yeah.
Because you have all of the ability to iterate in terms of speed and cost.
I mean, the numbers that you shared on cost, it was, like, a 90% reduction.
Yeah, we reduced costs like crazy on Figure 3.
It's crazy.
I think we... listen, we vertically integrated because we had to.
It would be great if we could go off and buy, like, motors and pop them in the robot.
It doesn't work like that.
It'd be great if we could go buy hands and just, like, you know, screw them on to the end.
It just literally doesn't work.
Yeah.
You have to go through the engineering work to basically understand how we do comms and power and sensors and failure cases and thermals and, you know, low-level firmware and embedded software. Otherwise, something breaks in the cost, something breaks in reliability, something breaks in that equation, and you're left with, like, hopefully the vendor fixes it, or you die.
And other stuff, like, the technology readiness of these things is really low.
We would have loved to go out and, like, buy stuff in the early days.
We tried.
And we basically just failed at all of it.
So we were like, okay, we need to design it ourselves.
And then now we manufacture.
Like, we do all final assembly and everything here.
Yeah.
And we do that, you know, in some cases because, like, nobody knows how to do that well. We do that a little bit for IP. Like, we really want to control that here and understand what people have access to. And then we also want to get good at making a lot of robots.
Like, what we need to get good at long term is, like, probably a few things: getting data at scale that can run neural nets...
Yeah.
...and then, you know, basically doing Helix really well, and then making a lot of robots, and then getting those things out in the world at scale. A pretty simple equation at the end of the day.
It just feels like the journey of getting Figure up and running must have been
so much harder than it would have been in China.
But then once you have everything built
in-house, all the actuators, you know,
training the neural net and everything in-house,
then you have a massive advantage versus anything
going on in China. Because if you'd been locked into a supply chain,
it only has certain models and makes.
It's like even if we used like an existing
supply chain for all this stuff, the robot wouldn't be able to
do what you saw today. Yeah.
Just can't do it.
If you go out and buy, like, a humanoid robot off the shelf today, you can't get it to do this.
Yeah.
We've, like, we've, we've bought robots off the shelf.
We've looked at them.
Like, you're just like, you can't get them to do this work.
They don't have the right sensors.
They don't have enough compute.
They don't have the right...
The hardware, the hands, the head: like, all of these are built around our neural net stack.
Yeah.
Well, that's the new thing, too. The neural net is incredibly integrated with the specific hardware.
If you watch folks that are trying to buy these robots off the shelf, like, say, from China...
Yeah.
They'll end up retrofitting them themselves with these giant backpacks. They'll have, like, power, they'll have compute there, they'll have thermals, and they'll have a wire hanging out that they've got to hook into the back. It probably has its own local battery. They'll hook it onto the back of the robot. Like, they have to, like, take it and overclock it. And it's just, like, the wrong way of doing this.
Yeah.
It's, like, a hard thing. It's like doing rockets: like, buying a rocket and, you know, saying, here, we're going to put stage two on the side, or something like that. It just doesn't really work at scale. It works in the early days for, like, hobby-grade demonstrations and things like this. But if you really want to do robotics at scale, you're going to have to go design it yourself.
Yeah. Looking at the companies coming out of China,
Unitree, EngineAI, and so forth: which are the ones that you're most interested in and excited about, as friendly competition, if you will?
Yeah, I think one thing that's great about China
is it just seems like it's like, as you mentioned earlier, like this explosion of like really great
talent and robots coming out of the door. And great, great entrepreneurial work ethic there, right?
It's awesome. Like, it's great. And I think, um,
I think it's just good for humanity, and, like, this needs to happen.
I think the thing that we've not seen is... we've not seen any, like, closed-loop, like, AI control from these systems at all.
Yeah.
We've seen a huge lack thereof of that stuff.
I mean, usually it's, like: here are the robots, we'll sell them, and they're doing a ton of, like, basically open-loop, like, hand-controller work.
Yeah.
So I think, like, doing that is... it's almost, like, orthogonal work from designing the system the right way for full autonomy.
But I think, you know, if we think about who Figure really competes with as our main competition, it's certainly China, like, as a whole. And, you know, for manufacturing. I mean, for humanoid labor. I think, just for humanoids, we really don't see anybody else besides China as a real competitive threat today.
Fascinating. What about the rumors about Apple getting into the business? They shut down their car project, and the rumors are that they're heading towards humanoids. Have you heard that?
We've heard this. I've been in conversation with every major tech company in the world in the last 12 months.
And then NVIDIA and Google and even Sam. I mean, everybody's making noises about it. Meta, Amazon.
Yeah. Listen, this is, like, going to be the largest economy in the world. It's, like, half of GDP: human labor.
$50 trillion.
Yeah, this is, like, the next great place to be. I think it's going to be a super impactful business. It'll lead to, like, ubiquitous goods and services for everybody in an age of abundance.
And it's going to be a super fun business, too.
It's like going to build a sci-fi future we all want.
It's going to feel like, it's going to feel like 2080 up in here.
So what you're seeing is every major group in the world will get into this space.
You have to.
You have to... a major group being Apple, Microsoft, Google?
I think every major player will want to do this.
I think the thing that's going to be hard is, like, we're doing, like, rocket-type difficulty in design here. So it's, like, if, you know, Meta said we're building rockets, you'd be like, that'd be crazy.
Yeah.
And I would think the humanoid is probably up there with, like, rocket design.
It's certainly harder from an engineering perspective than when I built Archer, you know, building, like, electric aircraft. And that was hard.
Yeah.
That was a 6,000-pound aircraft: 12 motors, 6 independent battery systems. We built our own control stack and embedded systems, did all the structural design ourselves, things like this. So I think it's probably up there with, like, some of the hardest hardware in the world, and you just have got to be all in.
Let me ask you about that, because we were talking backstage at Abundance 360 last year, and the basic tech stack had six layers of competency. You could probably rattle them off the top of your head.
This is for Archer and Figure?
Just for robotics prior to neural nets, I guess. So it applied to Archer and Figure.
Sure.
But what were they again?
I mean, Archer was basically a flying aircraft — I basically built electric vertical-takeoff-and-landing aircraft.
Right.
That is basically a flying robot; that's what I meant. It has battery systems on board.
Yeah.
And electric motors.
Yeah.
An electric motor is basically just a stator, rotor, gearbox.
Yeah.
Pretty simple. We have a few more sensors on our actuators than that — there are encoders and things like that in there.
Then you have, basically, control software: how do we control this thing and make it move around?
Yep.
And in the case of Archer and Figure, it's a very over-actuated system. Archer has about 24 degrees of freedom: we have propellers tilting, we have pitch on the blades, you have flaps on both the tail and the wing.
The tail and the wing.
And at Figure we have over 40 or so degrees of freedom on the system.
Then you have embedded software on board, and sensors. So how do you get the compute, the sensors, and the embedded software to all talk to each other?
Yeah.
Then you have structures.
Okay.
So those are kind of the core ingredients of a robot — of something physically moving through the world.
Yeah.
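A quick aside to pin those layers down: here is a minimal sketch, in Python, of the stack as Brett describes it. The layer names and one-line descriptions are our paraphrase of this conversation, not an official Figure or Archer architecture.

```python
# Our paraphrase of the core ingredients Brett lists, as a data structure.
# Layer names and descriptions summarize the conversation; they are not
# taken from any official Figure or Archer design document.
ROBOT_STACK = [
    ("batteries",         "energy storage on board"),
    ("actuators",         "electric motors: stator, rotor, gearbox, plus encoders"),
    ("control_software",  "moves an over-actuated system (Archer ~24 DoF, Figure 40+)"),
    ("embedded_software", "gets compute, sensors, and firmware talking to each other"),
    ("sensors",           "cameras, encoders, and other perception hardware"),
    ("structures",        "the physical frame; on Figure, a load-bearing exoskeleton"),
]

for i, (layer, role) in enumerate(ROBOT_STACK, start=1):
    print(f"{i}. {layer}: {role}")
```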
So then my question is: traditionally, the employee base would be experts in one, two, three, four, five, and six. And they'd be really, really good.
Yeah.
So then you come in and overlay this with Helix, and you've got this massive neural network thing. Is that a seventh competency, or is it something that permeates the others? Or did you take all of your microcontroller experts and start training them on neural networks?
The next thing, at Archer, is: how are you going to plan? And you do it through a pilot. Our aircraft, Midnight, is a piloted four-passenger aircraft. So who's doing the planning — the higher-level behaviors in the stack that tell all the lower-level control and code what to do?
Here at Figure, it's been changing over time, but now it's entirely neural nets — Helix does it. So the question is: what is the highest-level behavior telling the rest of the stack what to go do, and where does that come from?
Yeah.
It can come from a human. It can come from a joystick. It can come from an open-loop behavior, like we talked about before. Or it can come from a neural net that's doing the planning and reasoning.
So in the kitchen demonstration you guys saw today, which we released — what's telling the robot what to do next, to pull the rack out of the dishwasher, to go grab the cups — and not the coffee cups, but the water cups — that's a neural net making that plan.
In the case of my aircraft at Archer, it's a pilot determining when to take off, when to hover, when to transition in full flight, and then how to descend.
A different type of neural net. It's a human biological neural net.
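To make that "highest-level behavior telling the rest of the stack what to do" concrete, here is a toy sketch of the hierarchical split being described: a slow high-level planner emitting subtasks, and a fast low-level controller turning each subtask into joint commands. Every name here is hypothetical — this is not Helix's actual interface, just the shape of the idea.

```python
import random  # stand-in for real perception and policy models

def high_level_planner(scene: str) -> list[str]:
    """Slow 'System 2' loop: maps an observation to a sequence of subtasks.
    In a real system this would be a vision-language model, not a lookup."""
    plans = {"dirty_kitchen": ["pull_out_dish_rack", "grab_water_cups", "load_rack"]}
    return plans.get(scene, ["idle"])

def low_level_controller(subtask: str) -> list[float]:
    """Fast 'System 1' loop: maps a subtask to joint commands at a high rate.
    Random torques here; in a real system, a learned visuomotor policy."""
    return [random.uniform(-1.0, 1.0) for _ in range(40)]  # ~40 DoF, per the conversation

for subtask in high_level_planner("dirty_kitchen"):
    torques = low_level_controller(subtask)
    print(subtask, "->", len(torques), "joint commands")
```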
This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzie.com to schedule a demo and start building with Blitzy today.
Let's talk about application layers. We're seeing your movement into the home, beyond the industrial base. And healthcare is going to be a big part of this: elder care, helping people stay healthy at home. By the way, you just came through Fountain.
Yeah, through Fountain.
How was the experience for you?
Thanks for referring me.
Yeah, it was great. I went down to a clinic a couple weeks ago.
Which one — Orlando?
Orlando. Yeah, headquarters. I didn't know what to expect. I've done full-body MRIs and CT scans and blood work before, but when I got there, it was basically full stack.
I mean, you know this, but it's a full stack. Everything measurable about you.
Yeah, exactly.
Like 200 gigabytes of data.
Exactly. I spent like five hours there. Got the download last week. It was just unbelievable. What was great about it was that I could get a basically comprehensive understanding of my body and what's happening — but also somebody there reporting it out and talking me through how to understand it and what to do next.
And a plan.
And basically build a plan from there. It was great. I actually purchased it for my parents as well, things like this. I think it's just a great gift.
Yeah. Dave, we need to get you there too.
Why Orlando and why not somewhere else?
I was on the East Coast, so I popped down to Orlando. It was just easy for me.
Yeah, we've got New York, Orlando, Naples, Dallas — Houston's opening — Miami and LA.
Anyway, back to the conversation here. I can imagine this is going to up the value of health in the home a lot, right? One of my visions of the future is that you're constantly being monitored — your blood biochemistry, your protein levels, your vitamin levels, and so forth — and that's being uploaded to your Figure in the kitchen, cooking your meals, ideally suited for what you need in that moment. And then there's the whole elder-care side. How do you think about that?
Yeah. So growing up, I grew up on a farm in the Midwest, and my parents got into independent and assisted living about 15 years ago. So I kind of grew up around senior care a little bit in my life.
You got into that business?
Yeah. My parents own and operate senior housing facilities in the Midwest.
So wait, they're still in Illinois?
Yeah, still Midwest.
Wikipedia says your hometown has 2,000 people in it.
I grew up in Moweaqua, Illinois, man. I think it was like 1,800 people when I grew up.
Farm country.
Yeah, middle of nowhere. We had no traffic lights, no fast food. It was a dry town. It was just a whole different world.
Oh, man. Yeah.
Do they throw parades for you when you go back home?
Man — imagine my parade going through there.
Yeah. So you understand the value of a fully autonomous humanoid robot.
Yeah, I'm really passionate about figuring out how to ship robots into senior care and letting people age in place at home.
Yes, yes.
You know, it's hard to get people to move into assisted-living facilities.
Well, how does that work? You're sold out three years into the future — you can't make them fast enough to keep up with the demand. And then you've got BMW, you've got a bunch of industrial use cases. But then you've got this in-home side.
Maybe I'll level with you on how I think about things. We've been at this about three and a half years, trying to figure out the right recipe — the first instance of what a general-purpose architecture would look like for a humanoid. We believe we've found it internally, and we understand what that is.
Yeah.
And we believe we know how to make robots now and put them out. And we're going to run them really hard this year.
You showed us — what do you call it, the grid?
Yeah, the grid.
Could you describe what we saw in the grid?
The grid's my favorite place here. We have four buildings on campus, and it's one of them. We have the facility outfitted so we can expand hundreds of robots into it, running 24-7. And we have a little mission command post on the second story — kind of like a 007 situation room — and you can see every robot from there. It's going to be doing both home and commercial workforce. We're spinning it up right now; the facility just opened this week. You guys saw it — it's squeaky clean. And we'll start shipping Figure 3s into it this month.
So model homes, model factories, model operations.
Well, within mission control, you think of it as watching the robots, but the robots also have their own vision, which transmits back. So it's more like the combat movies, where back at home base they're watching the invasion — you're seeing through the eyes of the soldiers.
Yeah.
You've got all that data coming back into mission control too. So how many are in there at any given time? A couple hundred?
250.
250, 300 robots building a house or whatever it is, and all that video and telemetry comes back into mission control as they do it.
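For a sense of what "all that video and telemetry comes back" might look like on the wire, here is a toy telemetry record. Every field name is invented for illustration; Figure's actual schema is not public.

```python
from dataclasses import dataclass, asdict
import json, time

@dataclass
class FleetTelemetry:
    """One hypothetical record a robot might stream to mission control.
    All field names are invented; this is not Figure's real schema."""
    robot_id: str
    timestamp: float
    task: str
    video_frame_ref: str    # pointer to a stored camera frame, not raw pixels
    joint_positions: list   # radians, one entry per actuator
    interventions: int      # human takeovers since the last report

record = FleetTelemetry(
    robot_id="grid-0042",
    timestamp=time.time(),
    task="load_dishwasher",
    video_frame_ref="s3://example-bucket/frames/0042/000123.jpg",
    joint_positions=[0.0] * 40,
    interventions=0,
)
print(json.dumps(asdict(record))[:100], "...")
```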
Do you believe that AGI requires embodiment? There's a lot of conversation that's been put forward on that note.
I think — so I'm getting the chance right now to spend a lot of time with both physical AI at Figure and digital AI at Hark, so I kind of see both sides of it. And when I talk to AI today, or use it, I just feel like it's so dumb. You're starting a new chat and basically asking it for knowledge retrieval — it's like an advanced Google search engine. What I think about is that we want to build the future — we want to build Jarvis, or the Jetsons. I want a thing that I can talk to, that talks to me, that reasons, that has perfect memory, that can touch the world both digitally and physically. I want it to be general purpose — able to do things for me, to reason through things.
We have Hark now designing CAD from scratch. You ask it to go build a CAD thing — I asked it to build a monster truck for my son in CAD. It goes out and finds a CAD package, installs it, opens it up, learns how to build CAD and the parameters it needs to look at for building monster trucks. And it does it. We can do that in under an hour now.
Fully end to end. And just clean sheet.
Clean sheet, from a single prompt. And it's using tools and computers like a human can. And we're going to give it all the same tools — all the tools that Figure uses for CAD and for FEA, all this different stuff. And it's going to learn all of it.
And was that the inspiration for Hark? The fact that there are a lot of LLMs out there doing a lot of things, but none of them are really connected to CAD? And you have so much experience, you know, from your...
My inspiration for Hark is that I feel like all the big frontier labs are chasing this very abstract version of reasoning.
Well, specifically, Anthropic wants to dominate coding and code self-improvement, and then OpenAI wants to dominate...
I want to dominate a sci-fi AI future.
Jarvis. Everyone knows Jarvis.
I want the smartest person in the world with everybody.
Yeah.
There's also the von Neumann machine — the idea that these things go out into the solar system, and then ultimately out into the galaxy, and start making copies of themselves out of raw materials.
Nobody's doing this. Everybody's copying the other frontier lab that's copying their frontier lab. Nobody's building true multimodal systems that can really reason and understand and have persistent memory — that can go out and touch the world and do things. My version of AGI is: it can do what humans can do. And humans are not sitting there giving me Google search answers.
Right. Which is what we have now.
It's terrible. And in one aspect it's great, because this new alien technology dropped on the planet in 2022 and we're trying to figure out what to do with it. But the other aspect is that there's so much the models can do now — there's such an overhang in the product capabilities. And we're understanding that better now at Hark; we're understanding it better now at Figure. And I think we're abstractly getting to a place where we're building synthetic humans at scale. And these humans can work digitally — on the computer, using tools — and they can be there physically. But they'll be able to reason with you, talk, have memory, understand you. And they'll be able to go off and do anything a human can.
Have you been tracking Clawdbot — now Moltbot?
Yeah, I've been tracking Clawdbot. It's really cool.
Yeah, they renamed it Moltbot.
I think it just shows you how complacent a lot of the frontier labs have been — where you have such incredible capabilities that, with a very simple harness, very simple markdown files, and very simple tools on the back of Opus or whatever you're going to use, can do magical things for the world. And we've had that for a long time now. It wasn't like they went out and built a new AI model for this; they basically just put some harnessing and some MCP and APIs around it. And it can basically go out and be your executive assistant. It's really awesome. And there's a huge opportunity here to give that to every person in the world and make it easy.
Yeah.
And we're doing some model development now at Hark that I think is truly state of the art, and I'm excited about that. We're also doing some of that now in the physical world at Figure. So I'm seeing this digital-versus-physical thing on both sides, and I'm just so excited about this future, even the next 12 to 18 months. The next 12 to 18 months, I think, will be the largest AI transformation we've ever seen.
Yeah. And getting back to your point about what we do with healthcare and robots:
We're going to make a shit-ton of robots. We're spinning up resources right now — both at BotQ, which you're seeing now, and future BotQ — to be able to make millions of robots.
How long before these robots are your physician, your surgeon — able to actually support the complexity of a medical procedure?
I think from a hardware perspective, in 2026 we'll be able to do what surgeons can do. And given where we're at with our roadmap at Figure, I see no reason we can't.
That's pretty fast.
Yeah, it's pretty fast. I feel pretty confident that by the end of this year you'll have a hardware system where, if you could teleoperate it, you could basically do real surgery. Depends what type, but I think most things. And then the AI system just layers on top of that.
Yeah, then you've got to get the brain to work really well at these things. And this has got to work at the highest level of performance.
Let me ask you —
But federated learning gives you an incredible amount of knowledge.
I think we're very close to this working. We've already shown that if we can get the right data and the hardware can do it — the simple hack is, if you can teleoperate the robot doing the task, we can learn it.
Yeah, that's a really important point. If you can teleoperate the robot — if the mechanical systems, the motors, the fidelity can do it.
Yeah. And we're always dumping on teleop, but teleop has a couple of good things: it's a really good testing tool, and it proves out the hardware. If you can't teleoperate it, you're not going to be able to learn it. Meaning, if there are restrictions in the range of motion or payloads — if you pick up something heavy and the robot can't do it during teleoperation, it's not going to be able to do it with a learned policy. So if you can teleop it, you can learn it, from a hardware perspective. I think we'll be able to get there on the more dexterous types of things we talked about here. And what we've already shown is that if we can get the right data for it, we can get the hardware to do basically anything it's capable of.
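That "if you can teleop it, you can learn it" rule reads as a hardware feasibility gate: a teleoperated demonstration that stays inside joint-velocity and payload limits is evidence the mechanics can reproduce the motion, so a learned policy at least isn't blocked by hardware. A toy version of that check, with invented limits (the 20 kg echoes the payload figure mentioned later in the tour):

```python
def teleop_feasible(trace, max_joint_vel=10.0, max_payload_kg=20.0):
    """Return True if a teleoperated demo stayed within hardware limits.
    The limits and trace format are invented for illustration only."""
    for step in trace:
        if abs(step["joint_vel"]) > max_joint_vel:
            return False  # actuator can't track this motion
        if step["payload_kg"] > max_payload_kg:
            return False  # beyond what the arms can lift
    return True

demo = [{"joint_vel": 3.2, "payload_kg": 4.0},
        {"joint_vel": 8.9, "payload_kg": 4.0}]
print("hardware can support learning this task:", teleop_feasible(demo))
```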
And then you can add infrared, ultraviolet — you can add all kinds of additional sensors into the system.
Yeah, for sure. We have that now — the palm camera is a good example. Humans don't come with palm cameras, and we've been seeing a boost in performance. We can do a lot of cool things: we're reaching into cabinets now using the palm cameras.
As soon as I heard it on the tour, it was like, duh — totally makes sense.
It's great.
Yeah. I mean, how many times a day are you reaching for something you can't see? We're kind of blind — you end up putting your phone down there to get the camera to look at it.
Yeah. I'm sure we would have evolved an eye right here if it were physically possible.
Here's an interesting question for you. You've got the cameras in the head — again, mirroring a human — and in the hands. Why aren't there cameras rear-facing, or 360-degree facing? Maybe there are. I just bought an amazing drone, the Antigravity drone — have you seen it? It has a VR headset. It's got 360 above, 360 below, backwards, forwards. It's extraordinary. So how do you think about it?
We do — we have it on the robot.
You do?
They all have rear-facing cameras.
Okay, I have to ask this question for our moonshot mates.
If you want, we can go over and look behind them. They have cameras back there.
Okay. Yeah, I can see it rotate here.
So one of our moonshot mates, Salim Ismail — you might know him; he's a co-founder with Ray at Singularity University — he's like, why in the world are there only two hands? Why don't we see robots with four hands or six hands? So let's put that to bed once and for all.
We get asked this a lot: why not go superhuman, and all these different variations — which are all good questions.
I think my summary is this: our goal is to be able to do what humans can do, and to do it in the cheapest and lightest possible way. The lighter the better for safety; the cheapest is obviously very important; and all of those affect manufacturability and scale. When you start building things that are better than human in a lot of ways — if it can run a three-minute mile, if it can do a backflip, if it has a bunch of arms — it's going to make the robot really heavy, really costly, and really hard to manufacture. And then the question is: when I look at the logistics use case, I don't think you can actually have four arms or six arms and move any faster. The line is maybe a meter or so deep. You've got to get a package, and the package needs to be roughly in the center of the conveyor system so the scanner below it can scan it and put a label on. In that case, we basically have another three-to-five-x of speed in the actuators that we can run; the software just isn't enabling it yet, because it doesn't know how to do it. So we can run three to five times faster than what you saw today.
Wow.
Because we have the whole body to run.
Yeah.
Yeah, we can run the robots — we look at it in terms of radians per second; traditionally you might look at RPMs. We look at radians per second here. We have another three to five times headroom in the actuators you're seeing now.
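For reference, radians per second and RPM differ only by a constant factor: one revolution is 2π radians, so ω in RPM is ω in rad/s times 60/2π. A quick check of what three-to-five-times headroom means for a hypothetical joint speed (the 10 rad/s baseline is invented, purely for illustration):

```python
import math

def rad_s_to_rpm(omega: float) -> float:
    # One revolution = 2*pi radians; RPM counts revolutions per minute.
    return omega * 60.0 / (2.0 * math.pi)

baseline = 10.0  # rad/s -- an invented joint speed, for illustration only
for headroom in (1, 3, 5):
    omega = baseline * headroom
    print(f"{headroom}x: {omega:5.1f} rad/s = {rad_s_to_rpm(omega):6.1f} RPM")
```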
I would love to see a robot —
The thing is the cost of a mistake. When you're unloading the dishwasher at the current speed, the cost of a mistake is relatively low. You start running three to five times faster and that thing glitches — that plate is moving fast.
I just don't know if it's really needed. You're going to get a really expensive robot, it's going to be less safe, and it's going to be harder to manufacture. And then, over time, you're going to get the robots down to $10,000 to $20,000 — and instead you'd have a really expensive robot, call it $50,000.
Yeah.
And cost is really a function of manufacturing volumes. So you really want to build it like the car.
Well, that's why going after the industrial use case is such a no-brainer.
Well, the home is huge in the end. But if you're running three to five times faster than what we're seeing right now in the home and you kick the cat or something like that, that's not great. In the industrial use case, everything is kind of taped off, you know.
I remember I was interviewing you for my next book, which comes out in April. Here it is. We've talked about this, but I'm super excited about it. And, of course, you and Figure are prominent in the book — because this is godlike. I mean, it's extraordinary: we're giving life to new systems. I was interviewing you about how many, and what the price point is. And I want to double down on that, because the numbers are pretty staggering — and they make sense. If you're actually getting the price down to $20,000 a robot — I haven't heard $10,000 a robot, but at $20,000 a robot you're leasing a robot for like $300 a month: $10 a day, $0.40 an hour. And then you ask the question: okay, if it's really $10 a day, how many would you have? You end up with a lot of robots.
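That chain of numbers is simple arithmetic once you grant the lease rate; only the $300-a-month figure is the assumption, and the rest follows:

```python
robot_price = 20_000           # dollars -- the price point discussed
monthly_lease = 300            # dollars/month -- the assumed lease rate
per_day = monthly_lease / 30   # ~$10 a day
per_hour = per_day / 24        # ~$0.42 an hour, close to the $0.40 quoted
print(f"${per_day:.2f}/day, ${per_hour:.2f}/hour")
```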
So what's your estimate of the number of robots on planet Earth in 2035, 2040? Where do you think that's going?
I mean, I think it's relatively straightforward to think that every human should have a humanoid to do all their work. And then we should have maybe on the order of five to seven, maybe ten billion in the commercial workforce. So if all goes well, I think you could basically build tens of billions of humanoids on the planet.
Okay. Yeah.
I mean, you're basically building a replica of a human that's really cheap and works 24-7. So there's really no limit. And then, you know, we will be at a point — hopefully in 24 months — where robots will build all the robots.
Well, that's where I wanted to ask about scale, because you said you're going to ramp up to millions a year. Well, one per person on the planet is 8 billion, so millions per year really isn't that much. So then you're like: okay, the self-improvement loop is going to be incredible here.
Yeah — it's funny, we talk about this — you also need tons of working capital. If you put a billion robots on the planet, even at, let's call it, $20,000 apiece, you're talking $20 trillion of working capital.
I mean, that's not that many. There's a billion cars on the planet right now.
There's not more than that.
But it took 80 years to accumulate those cars — some of those cars are 30, 40 years old. If you tried to build them in five years... We have a couple billion cars on the planet, but we make a billion or more cell phones a year.
And I think this is more cell-phone-like, where it's going to be personal. We even go back and forth on: if your robot breaks, do you want a brand-new refurbished robot, or do you want the old robot you used to have, because you've known it and you understand it?
It's got a personality.
I think it's going to be with you. It's going to know everything about you.
Why wouldn't you just have a personality transfer?
You could. But I think there are some inner workings — it's got all your things, it's got a little bit of a feeling to it. But yeah, for sure, I think that'll be fine.
Let me ask the geeky finance question, though, just before we lose the topic.
Sure.
If you have an all-neural-network-based system, it can learn at an incredible rate, and the technology is advancing remarkably. You look 24 months into the future and the demand is on the order of billions, not millions. Like you said — using the cell phone as an analogy — Apple had 15 years to profitably ramp up production to a billion units a year. Here, the demand is there to do it in one year. But you would need a trillion dollars — some insane amount of capital.
That's no longer an insane amount of capital.
So what do you do? Do you leave the world starved, asking for their robots for five years? Or do you raise the trillion dollars?
If you look at credit card receivables or car leasing, these are trillion-dollar markets in terms of financing. So I think the financing market is there for this.
What do you do?
I think, one, you've got to solve the neural net game.
You have to be able to scale with neural nets; you have to solve pre-training and you have to solve generalization. You have to solve for a general-purpose robot. That is table stakes — you have to solve this. That's why we're so obsessed with trying to solve it here at Figure. If you don't solve that, none of this matters.
The second step is you have to have robots in the loop, building other robots. So those two things have to be solved. And you have to design the robot so that, hopefully, it can design itself at the end of the day. So there's a bunch of stuff we're putting in place in terms of manufacturing execution software, the lines, all the design of it — so we can, at scale, have humanoids go in, build other humanoids, and get them off the line.
Yeah.
And I think these adoption curves are shortening and shortening. I do think that if we could solve a general-purpose humanoid robot today — one that could do everything you wanted — we could ship a billion today.
Say again?
I think we could ship a billion today.
Yeah, I totally agree.
So basically it comes down to: can you get the neural nets to work at scale? Can you get the models good enough to generalize — call it a general-purpose robot, like a human in a suit?
Yeah.
And then can you get robots in the loop, building other robots?
Well, the other thing that's really compelling is that the neural net is the only IP you need to protect — as long as you have the federated learning coming back to the mothership, and all the training is happening centrally. You know the Star Trek Genesis project, right? You've got a little capsule that basically has the germ of it. You could literally ship a box to Kenya: here's the Figure box. It opens up and starts making a Figure manufacturing plant right out of thin air in the middle of Kenya. And if there's capital there to bring the resources to it, that's how you get infinite scale.
The innermost loop is energy and AI intelligence — and, you know, local mining for the materials or whatever — but it's completely self-contained. The key is that you just unlocked capital that wanted to build something productive, while all of the IP is still flowing back to train the neural nets centrally.
Yeah — 100x the GDP of that jurisdiction. There's latent capital all over the world.
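The "training happens centrally, the IP stays in the weights" loop is in the spirit of fleet learning or federated averaging. Here is a minimal sketch with an invented structure — a real robot-fleet training pipeline is far more involved, and Figure has not published theirs:

```python
import numpy as np

def federated_average(weight_sets):
    """Average per-robot weight updates into one central model.
    Plain FedAvg-style averaging; a real pipeline would weight robots
    by data volume, filter bad episodes, and retrain centrally."""
    return {
        name: np.mean([w[name] for w in weight_sets], axis=0)
        for name in weight_sets[0]
    }

# Three robots return slightly different fine-tuned weights for one layer.
fleet = [{"policy_head": np.random.randn(4, 4)} for _ in range(3)]
central = federated_average(fleet)
print(central["policy_head"].shape)  # (4, 4) -- then ships back to every robot
```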
So, there's a lot of fear out there in the world about losing jobs to AI and to robots, and the reality is that the conversation has now shifted to: no, this is going to create massive abundance and universal high income. And that happens if, rather than the company hiring a robot to replace me, I hire a robot to go out and do my work for me — and in fact it earns triple my salary because it's working three shifts, and it's doing that for me, and then it earns enough to get a second robot working for me.
Yeah.
And so the question becomes: where is that capital captured? Is it inside the hyperscalers? Is it inside the individual? That's going to be the interesting conversation coming up. How do you think about that, Brett?
I mean, we're going to sell robots at scale. You're going to be able to deploy as many robots as you want, to do whatever you want.
No instructions needed?
No instruction manual. Whatever you want it to do, it'll learn it. It'll research the internet. It'll use digital tools if it needs to. It'll talk to you. It'll reason.
The future is going to be about safety and privacy. Let's talk about safety in the home and privacy in the home. There were lawsuits over the last few years with Google and Amazon — it's listening to you in your bedroom and so forth. How do you address safety and privacy? Or is it just too early?
These are really hard questions to answer in one go, because there are a bunch of different safety implications here. Safety is probably the number one thing to tackle to get robots into the home at scale.
Yeah.
There's the semantic understanding of safety: if there's a lit candle and I knock it over by accident, or there's a boiling pot of water and I hit it — just understanding how to be safe in an environment where humans are.
Yeah.
And then there's the intrinsic safety: can the robot be around humans and animals and pets and be safe? That has to be solved. We can talk at length about how we're going to solve those problems. And then you have the whole privacy and cybersecurity aspect of this. It needs to be done with good intention: how do we solve those problems? We are working on all of those now. They are very difficult things to get right. I do see a path where we can build intrinsically really safe robots around people and pets.
Yeah.
We have a plan for how we're going to do that.
I mean, they could be safer than humans by a large margin, just like autonomous cars are safer than humans.
At the end of the day, yeah. These have superhuman perception — they can see basically all around themselves at all times. They're always on, always computing what to go do. So, assuming nobody's trying to be mean to the robots or things like that, I think we should be extremely safe in everything we're doing.
And then on privacy: these are going to be in your home. So being upfront about what data we're collecting, where that data is going, how we're keeping that data private, and encrypting that data — all of this is super important. We have an entire team on cybersecurity here in-house, on both the product and commercial side and the corporate side, working through how we think about this at scale. They're great — they come from the big companies that have been doing this for a long time. And we think about the corporate side as well as the product side, on the robot as well.
Yeah. Your facility here, which is your sort of prototype manufacturing facility — 50,000 robots a year, you imagine?
That facility can support about four lines, and each line can do about 12,000 units a year — so a little under 50,000 units a year.
What's your next step up, do you think?
I mean, we're building thousands of robots right now, so that's the big push we're doing. You saw it today — that's the Figure 3 work coming off the lines. Then we want to go to tens of thousands, and then hundreds of thousands, and millions. I think we need to take those steps as a company. This facility will top out at a little under 50,000 units a year at full capacity.
So, thinking long term, this will probably look like low volume when we look back in five or ten years.
Do you think you might franchise out the neural net and the circuitry around it? Because all these other people are saying: oh, I'm building a robot that cleans industrial pipes, I'm building a robot for this — all these different form factors.
No. No — just keep it.
I think it's super unsafe: these robots out there around humans, where we don't own the hardware and we don't know what they're doing, with our neural net in it. I think it's similar to Archer. When we were doing Archer — it's a safety-critical system, and licensing it out to other folks and things like that is very problematic. I think here it's the same thing, and a humanoid is even more so. We have a fiduciary duty to our civilization to build really safe humanoid robots at scale. And just giving this AI system, or even the hardware, to anybody who wants it is not something we will entertain.
So then when do you branch out into other form factors — things that work underwater?
I think in the future, everything that moves will be a robot.
Mm-hmm.
Besides humans.
Yeah.
And within that, I think humanoids will dominate the plurality of all robots — they'll just be that much bigger a percentage of them. The other robots will be niche and expensive, like the super-duty trucks you have out mining. They'll be made for specific areas — maybe underwater, as you said, or heart surgery.
Or brain surgery — you've got these very, very fine tools. It's like a robot controlling a robot.
I think you're left with very expensive equipment that's very siloed. You really want to build a general-purpose machine that can learn across a variety of different tasks and have that transfer learning. I think that's extremely important here, and it needs a very high variety of rich data. This is only going to help the robot system get smarter and better. So my view is it'll be humanoid robots on humanoid robots everywhere on the planet, and there will be other robots, but they'll just be niche businesses.
When I was coming up here, I posted the video you released on Helix 2 today and asked the community for questions, and it blew up with a whole bunch of amazing ones. So, one of the questions: do you have a blooper reel, and can folks see it? And what's the weirdest task someone on your team has tried to teach it to do that absolutely did not work? That's from Ben Casper.
Ben Casper, nice. The weirdest task that did not work. Well — weirdest task... Listen, every —
The jogging was interesting.
Okay, jogging was fun. Jogging was cool because we really had a steerable jogger. A lot of the work in running has been, again, open loop, but we had a steerable RL controller we could use.
Another one — and I actually have a gift for you; it kind of goes with this. I have two Figure deadmau5 hats.
Okay. What's that mean?
We opened at Red Rocks late last year at a deadmau5 concert and had robots on stage. So we generally don't venture out into weirder stuff.
There you go. Nice.
And we actually had deadmau5 at the last two holiday parties at Figure, which was fun. We're generally pretty focused on how we design something really useful, but we've had some pockets of time to do fun stuff like this. Having robots on stage with deadmau5 at Red Rocks — I flew in for it. It was unbelievable.
And you had them on stage.
We had several Figure 2s on stage, jamming. We had them all synced to the music as they danced, which was really cool. So whatever it heard, it was moving to.
I had him on stage last year at the Abundance Summit, but Figure wasn't with you. So I need to get you back there with Figure in the loop.
Totally. Yeah, for sure.
So when are we going to see the first Figure in a customer's home?
The next question. Yeah — I want to ship robots when they're really ready. I don't want to ship slop.
Best guess: earliest, latest window?
I think last year I said that in 2026 we'd launch a robot to do end-to-end home work in alpha testing — like in my home — to do full mopping, cleaning, full-scale, long-horizon work.
Figure, you, and your daughter.
Yeah.
Putting stuff away.
We've done pockets of work really well — we've done dishes and laundry and all this — and you're seeing some of that getting tied together now. But I want to do it across days and weeks of work. And I want to be able to drop it into somebody's home it's never seen and have that really work well. And I want to be able to talk to it, have it understand me, and have it remember things. And we'll show it stuff — I'll be able to walk through a room and show it, almost like a visitor you have at your house for a week, and it'll understand what to go do.
2027, '28, '29?
My best guess — well, I'll tell you, we're working until midnight every night to solve this problem. We are here every weekend, every night, trying to figure out how to solve it. This is the question we got on general robotics, and this is where we're headed: I think by end of year we'll be able to put a robot into an unseen home and have it do fairly long-horizon work. And then you want to measure how many human interventions you have — is it once an hour, once a day, once a week, once a month? I think we'll do that, and I think that would be a huge accomplishment for us. I think we'd be on the path to solving general robotics. And then next year you'd be on a path where you could ship them into users' homes and start making sure they work well.
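That intervention count is easy to formalize as mean time between interventions (MTBI) over a deployment log — higher is better, and "once a month" beats "once an hour" by orders of magnitude. A toy version, with invented log data:

```python
def mean_time_between_interventions(events, hours_deployed):
    """events: timestamps (in hours) when a human had to step in.
    Returns MTBI in hours. Data and names are invented for illustration."""
    if not events:
        return float("inf")  # a perfect run: no interventions at all
    return hours_deployed / len(events)

log = [2.5, 9.0, 130.0]  # three interventions over one week in one home
week = 7 * 24
print(f"MTBI: {mean_time_between_interventions(log, week):.0f} hours")  # 56 hours
```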
So if anybody tells you, hey, we're going to ship them — or teleop them — into the home at scale in a year: you've got to ship a small quantity and they've got to work well, and then you've got to work out the problems and ship again. You have to have an iterative design roadmap, which we have here. We need to learn. So it's going to work well in one home, then in 10 homes, then 100, then 10,000, then 100,000, then 10 million.
So it's going to be a super steep exponential growth curve.
Exponential growth curve.
Is there anything to worry about there in terms of time to market? Because, like I said, you're sold out for years to come anyway. Is competition going to come in and grab the market first?
The work we showed today and the work we showed two years ago has never been done, in my mind, by any other human or company in history. And if that's the marker — whenever somebody can do the Keurig test for a couple minutes with uncut film, and I can watch it do it closed loop, with bimanual manipulation, not just standing there — they're two years away from where we're at. So I think we'll see. We're trying to push and continue to pull ahead. But hopefully by next year we can really show real general purpose in this robot — maybe even as soon as this year. I mean, listen, it could happen in a couple of months. We have the right stack now. We are building datasets at scale, so quickly. We are spending so much time and money on this internally. We just launched our new B200 cluster — Nvidia helped, Jensen helped — and it went live this year.
How many GPUs in your cluster?
A lot. We have 3,000 B200s going live, and we have another set of much larger GPU clusters that we plan to put out here. And we use it for pre-training.
Yeah. Pre-training.
And you run it physically here?
We do.
A lot of power.
A lot of power.
So, Jay Crate asks a question for the science fiction geeks among us: what's beyond the three Asimov laws for you? Have you thought about fundamental laws to program into your robots? I'd think you'd really want to put those rules down and burn them into the non-volatile memory on board the robot — at the chip level, at the substrate level.
We've been thinking about this quite a lot. On one hand, we still want to solve general-purposeness; on the other hand, we also want to figure out, once we're close, how we get all the supporting things ready to go. And this is one of those. Safety needs to be there, privacy needs to be there, fleet operations, the reliability of the robot, the maintenance plan for how we're going to service it, the business model, financing — all of it needs to be packaged and ready to go. So we're working through all of these now.
It's funny — Asimov got a lot of things right. And on these foundational rules for how we treat humans: we have our own spin on this that we won't publicly share today. But the goal is to do good work.
And is this something everyone learns internally in corporate training and memorizes, and all that?
It's something that we put — and are going to continue to put — on all the robots.
So you have a newborn.
A new one?
You have a newborn child.
Oh, yeah.
So the question here from KK: when would you trust Figure to hold your newborn?
Yeah, that's an interesting one. The new Figure 3 is soft — it looks like it's designed for the home — but it's still about 135 pounds, right?
I like this question a lot, because at Archer I always say: until I put me, my kids, and my family on board, it's not safe enough to fly anybody. And I wouldn't do that today at Archer — I hope soon I can. At Figure, I think it's the same question: when do I feel safe enough to have a robot in my home?
Well, you've had one in your home.
But, you know, I've been there, we've had folks there, and we monitor it. When we're truly safe — and we're not there now — I think that's a great bar for us to hit. When I can put a robot in my home fully autonomously, end to end, around all my kids — that's the point where I would trust it, and the point where I would say this is ready for everybody. It's a good heuristic for us to really try to hit. And that's our goal here: to be able to give it free rein in my home. Right now, we're there with it babysat — we watch it, and it works well. I've shown videos of the robot, and my kids have been with the robot there, but I think we're doing it in a safe way.
And the robots have been totally safe, which is great. But, one, we need to build a system safety architecture that's really fault-tolerant and redundant in real time — we've done that, and we'll do an even better job of it in the future. And two, you just have to build a safety track record. There's nothing better than actually proving this thing can be safe.
Well, it's a nice barrier to entry too, if you take the Apple road to it: it's got to be a great out-of-the-box experience. Well, that means not stepping on the cat.
Totally.
Certainly not dropping the baby. And then there's the cybersecurity side of it, too: not transmitting everything back and having it posted on the internet. But if you get that reputation — and of all the companies I've met, you're perfectly positioned to get that reputation —
Yeah. Don't make a mistake along the way.
Then everybody just says: you know what, I'm going to choose a Figure robot, because it's the same way people feel about the Apple brand with cybersecurity.
Yeah. So I hope people walk away from this knowing that general-purpose robots are coming. It feels very close. And then there are a lot of other things around that that you have to get right to build this at scale.
Is that your main message for people watching?
I think the main message — what we feel every day — for people who are excited about AI and robotics, is that this is going to happen really soon. And it's happening. I don't think people have a clue how fast this transition is going to be. Just go to our YouTube and watch our videos from the last two years, side by side. The change every single year is dramatic.
Yeah. I mean, you saw it today in person.
And our robots have now been at customer sites and things like that — they've been out, and we're going to continue to ship more. But it is hard to feel, because you don't see it every day. At some point you're going to walk out — probably in San Francisco first — and you'll see more humanoids than humans. And I think that'll be an amazing day.
Right now I'm driving around Santa Monica — and by the way, we just did a podcast early this morning with Cathie Wood, who sends her best.
Oh, cool.
She's a huge fan of yours. She's invested in me at both Archer and Figure. She's great.
Yeah, she feels the same way. She is very proud to be an investor in Figure.
And I was telling her: when I'm out with my kids in Santa Monica right now, we do this thing of counting the number of Waymos we see. And it's crazy — we'll see like 10 Waymos. And then the Coco robots, the little ground robots, the Starship bots and such. They're all over the place.
Crazy.
And it's interesting, right? Because the first time you see it, you're pulling out your phone, you're taking a photo — it's really cool. And then you take it for granted. And then it's in your way.
Yeah.
Right.
So my wife and I — anytime we go out... like last weekend we had date night and took a Waymo downtown. And it was just so unbelievable. As an engineer working on these kinds of hard projects, I can feel the amount of engineering work they had to do to put that together safely. Google did it.
You're controlling the music and the lights and the environment. If you take a New York City cab and get in the back, it's this smoky hell — and then you get into a Waymo and turn it into your little paradise. It's such a night-and-day job on the product.
I mean, Larry Page saw the product win the DARPA Grand Challenge back in 2005, committed to it, and brought in the team. I think it's been 16, 17 years, and they stuck with it — Astro Teller at X basically built it out — and Waymo's an amazing product. They've been undeterred for 16, 17 years: don't worry about it, we're just going to make it. And they did. And it's unbelievable.
Yeah. No — amazing. It's very inspirational.
Can I ask you my geeky sci-fi-meets-geopolitics question du jour? So I just got back from Davos on Friday — today's Tuesday — so nine time zones away. And the big topic at Davos, of course, is Greenland. And all the Europeans are saying Greenland could never possibly be mined — it's impossible to extract minerals from this frozen, cold tundra. And we have some family mining operations in Minnesota, where it's not nearly as cold, but still pretty damn cold.
You don't have mile-thick ice sheets.
We do not have mile-thick ice sheets. But I think if you're talking about a billion and then eight billion robots, and you need the materials, and that's the only constraint, and you have robots that can operate —
Then mine asteroids, buddy.
Seriously. You think we're going to be mining asteroids before Greenland?
No. We'll do Greenland first.
But you think Greenland is viable? And I'm not talking about 20 years from now. I'm talking: if you want to build a billion robots, say, six years from today —
It's a $50 trillion marketplace.
Yeah. So that demand drives it. Don't you think you'd find a way to get through the ice, given a million robots working on it?
I'd hope so, yeah. I think we'd find maybe even better physics, but definitely better engineering solutions for this. And then we'd be able to put an unlimited amount of human capacity at it through humanoids.
Yeah.
Yeah.
That's what I'm thinking too.
Yeah.
Because the machinery that I see is massively automated, but it's still driven by people, still operated by people. And it doesn't need to be.
Yeah. It's just crazy that that stuff works, right?
Yeah. And the humanoids — it's just the neural nets.
The thing is, when you make it work on unloading the dishwasher, people don't realize how close that is to working on every other task.
The dishwasher, folding laundry — these things that we're already doing are so hard. They're such hard tasks. You have these compliant materials that are all changing dynamically with you; everything's not in the same place. It's very different from being in a conveyor system, or manufacturing, or something like that. And we can do it today. Now it's a matter of doing it better — with higher reliability, across more diversity, across the distribution of what humans do every day. But that's a data play.
The thing is, if you achieved that goal by hacking together 100,000 lines of C++ and teleoperating it, it would look the same — but you'd be nowhere near conquering every other problem. If you did it purely — nothing but a neural net, purely trained — that means you're within a millimeter of every task.
We feel like the millimeter here is just data. The only difference between why it can do logistics and then learn towel folding, or learn dishes, or whatever we end up showing in manufacturing — literally, it's just data. Data goes into the neural net, and now it can do the work. The robot hardware doesn't change for the updates; it just uses new neural-net weights on board.
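"The hardware doesn't change; it just gets new weights" is essentially an over-the-air model update. Here is a toy sketch of the receiving end of that swap — the file names, checksum flow, and all other details are invented, and a real robot would also stage, sim-test, and roll back:

```python
import hashlib
from pathlib import Path

def install_new_weights(ckpt: Path, expected_sha256: str) -> bool:
    """Verify and activate a downloaded policy checkpoint.
    Illustration only: a real update system would A/B-slot the file,
    sanity-check the policy, and roll back on failure."""
    digest = hashlib.sha256(ckpt.read_bytes()).hexdigest()
    if digest != expected_sha256:
        return False  # corrupted or tampered download: keep the old weights
    ckpt.rename(ckpt.with_suffix(".active"))  # mark the new weights as live
    return True
```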
You know, I think we're just bound by data now. And it's not a trivial thing to get the right pre-training set for this at scale. But we have a bet that I think will work, and we've been deploying it at scale for the last three or four months. So stay tuned — we're working through it. And I hope this will lead to — I think you'll see a lot of positive transfer from a robot that's able to generalize across a lot of things.
Yeah, amazing.
One last thing before we wrap up. Can we pull the camera in close and maybe get a tour of Figure 3?
Yeah, let's do it.
Thanks for the close-up and intimate tour. So, Figure 1.
Figure 1.
One cool thing about Figure 1 is that we designed most of the system in-house. We didn't care about looks; we cared about unblocking the AI and controls team — giving them something they could use from a software perspective. So we designed and walked this robot in under one year from when I incorporated the company. I think it's probably one of the fastest times in history.
That's a lot of parts, man. Did you draw this by hand? Did you draw this?
David, basically our design lead, designed this. Not the prettiest robot, but it had what we needed, which was a functional robot we could get up off the ground and start using for all the policy development. We did the Keurig K-Cup demo with this robot.
Oh, and the hand moves — you can definitely move it. This is going to be a collector's item someday.
Yeah. This thing — you can feel it, it's heavy. It's maybe 130, 140 pounds. That's not that different from now.
Yeah, not bad. It's all CNC — CNC aluminum, most of the structure.
What else do we know about this before we move to Figure 2?
We didn't really care about speeds, or wiring, or some of the electronics — a lot of the design. It was mostly: get a functional humanoid robot out so we could do development on it. So we did that. We built a few of them. We did our first neural network on this robot, which I think was phenomenal — we did so much development with it really quickly. We also learned how to build actuators, battery systems, wiring, structures, kinematics, joints, different sensors — all of this is stuff we learned. And then we took all of that and integrated it into Figure 2.
So you got the cost down from two to three by 90%, I know. What was the cost drop from here to there?
Probably another 90% — about the same, to be frank.
Wow.
Yeah. A lot of it was machined parts, and we moved to tooled parts in Figure 3.
So you've got two cameras here.
Two cameras here. We have a back camera. You can see?
Yep.
We also have cameras right here in the torso, pointing down, so we can see where the feet are in case a box is occluding them. Come take a look at the camera at the back of the robot here, one second — the camera pointing down.
It's right there.
So back here, what's going on? There are camera ports here?
Yep. We have a backward-facing camera. We have different ports for debugging, if we want to hook up a cable to it. And we can also turn the robot on and off from here.
Amazing.
Yeah.
Yeah.
And then basically we removed all the external wires on this robot.
All the structure is exoskeleton.
So the exterior takes the loads, almost like my aircraft at Archer.
Sure.
The skin, the outside housing, carries all the loads.
We do the same thing here.
So the outer shell takes all the loads.
We have our second-generation actuators.
We have our third-generation hands on this robot.
We have more cameras on board.
We have about double or triple the amount of compute
and about double the battery capacity onboard.
Yes.
And the degree of beauty went up.
Yeah.
Yeah, significantly.
Yes.
David did a good job making this like much, much more presentable.
So funny, it's venting heat out the armpits.
Just like you all.
So, yeah, it actually sucks air in there and here,
and we push it out through the bottom of the torso.
Okay, what's going on in the back of it?
Yep, we basically have these different paddings on the knees and some parts of the arms.
They basically make it so that if you got your finger stuck here...
Oh, safety.
Yeah.
Maybe it would hurt, but it wouldn't cut it off.
Yeah.
So it's sort of like a car door.
Sure.
Uh, today.
And here's the workhorse.
This is our, yeah, this is our figure three.
Nice.
This is it right here.
Yeah.
So we basically did a
couple things. We made the robot much skinnier and lower mass, but kept all the speeds
and torques the same. So it's just as powerful and just as fast, but also kind of skinnier.
What's the mass? This is about 135 pounds. This is about 150, a little over 150 pounds.
Carrying weight, how much weight can it carry? About 20 kilos.
Completely different hand. The hands have a glove, tactile sensors, compliant material on them for better
grasp, and also a camera. Most of the robot is soft-wrapped. You can see it
even up here,
a squishiness to the chest
and different parts of the robot.
We have very few pinch points on the robot now.
What else?
We reduced the cost massively.
We have a better thermal system,
and we also increased the compute on this robot
from the last generation.
We have new feet that have a toe.
You might think of the toe like...
Yeah, I know, it's a big part of the help.
It's a passive toe on the foot,
and you might think of it as just helping it walk better,
but it's not just that: when we get down, you know, get down here, we're on our
toe box, and it really helps get the range of motion.
Without that, you might need more joints.
Brett, talk about the face, because this is a big question of, you know, do you develop,
do you show facial features or not?
And you went... what do you think?
Do you go full Westworld, or do you have it look like a robot?
Wow.
I mean, it comes across as beautiful, right?
It had to be beautiful.
And it comes across sleek,
but it could have a negative,
a little dystopian feel with a black face.
So we have three screens on the robot.
This is powered off.
We have a main screen.
You have two screens on the side.
And then we have obviously a bunch of cameras
and sensors in the head.
So on the screens, we basically can do anything.
You could watch a Netflix movie.
Look into my eyes.
Yeah, whatever you want.
Like, kids get bored?
It's like, let's throw something up there.
By the way, the brain is right in here,
which makes a ton of sense to me.
Yeah.
And it's where the ancient Romans thought it was.
You basically need a lot of onboard computation.
There is nowhere else to put it right now.
Yeah, exactly.
And yeah, and also it's easier to vent the heat from here too.
And then you just put all the sensors up here and that totally makes sense.
I guess I could put a latex face over the head if I wanted.
Yeah, you could basically put a silicone face on it and put hair on it.
We're good to go.
We also have other outfits.
This is one of our logistics outfits.
Same robot.
Basically, we're able to outfit it with different types of soft goods.
And we have another robot here that we've also outfitted, wearing a jacket. This is cut-resistant. So they all
have different traits. Some of these gloves are also better for grasping different
materials that might be, say, dusty, or maybe it's a piece of sheet metal, or it's slick.
Do you think about operating in zero G? You'd just need a different training set, and...
I think so, yeah. I think we'd really love to run robots at scale in space.
You're going to populate... I've got my zero-g airplane. We should take it up.
Yeah, let's get these things on there.
Yeah. Well, look, we're going to fill data centers in space very soon.
Someone needs to assemble them, and zero G is the operating environment.
And then we'll get to other planets too. It'll be super important.
Yes, yes. And then we'll disassemble a moon and the asteroid belt and use it for materials.
10%. Alex will love that you said that. Let's do it.
If you made it to the end of this episode,
which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I
spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you.
If you're not a subscriber yet, please consider subscribing so you get the news as it comes out.
I also want to invite you to join me on my weekly newsletter called Metatrends.
I have a research team.
You may not know this, but we spend the entire week looking at the Metatrends that are impacting your family, your company, your industry, your nation.
And I put this into a two-minute read every week.
If you'd like to get access to the Metatrends newsletter every week, go to Diamandis.com slash Metatrends.
That's Diamandis.com slash...
Metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.
