a16z Podcast - Fei-Fei Li: World Models and the Multiverse
Episode Date: June 4, 2025

What if the next leap in artificial intelligence isn't about better language, but better understanding of space? In this episode, a16z General Partner Erik Torenberg moderates a conversation with Fei-Fei Li, cofounder and CEO of World Labs, and a16z General Partner Martin Casado, an early investor in the company. Together, they dive into the concept of world models: AI systems that can understand and reason about the 3D, physical world, not just generate text.

Often called the "godmother of AI," Fei-Fei explains why spatial intelligence is a fundamental and still-missing piece of today's AI, and why she's building an entire company to solve it. Martin shares how he and Fei-Fei aligned on this vision long before it became fashionable, and why it could reshape the future of robotics, creativity, and computational interfaces.

From the limits of LLMs to the promise of embodied intelligence, this conversation blends personal stories with deep technical insights, exploring what it really means to build AI that understands the real (and virtual) world.

Resources:
Find Fei-Fei on X: https://x.com/drfeifei
Find Martin on X: https://x.com/martin_casado
Learn more about World Labs: https://www.worldlabs.ai/

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
That space, the 3D space, the space out there, the space in your mind's eye,
the spatial intelligence that enables people to do so many things beyond language
is a critical part of intelligence.
Fei-Fei leaned over to me, she's like, you know what we're missing?
And I said, what are we missing? She said, we're missing a world model.
And I'm like, yes!
We can actually create infinite universes.
Some are for robots.
Some are for creativity.
Some are for socialization.
for travel, some are for storytelling. It suddenly will enable us to live in a multiverse way.
The imagination is boundless.
When we talk about AI today, the conversation is dominated by language.
LLMs, tokens, prompts, but what if we're missing something more fundamental?
Not words, but space.
The physical world we move through and shape.
My guests today think we are.
Fei-Fei, a pioneer in modern AI, helped usher in the deep learning era by putting data at the center
of machine learning. Now she's co-founder and CEO of World Labs, building world models,
AI systems that perceive and act in 3D space. She's joined by A16Z general partner,
Martin Casado, computer scientist, VP, founder, and one of the first people Fei-Fei called
when forming the company. Today, they explain why spatial intelligence is core to general
intelligence and why it's time to go beyond language. Let's get into it.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com/disclosures.
Fei-Fei, thank you so much for joining us here today.
Martin, why don't you briefly brag on behalf of Fei-Fei a little bit,
and how would you summarize her contributions to AI for people unfamiliar?
Yeah, well, someone that doesn't need a lot of introduction,
and she's done so many things that I can't fit them all in,
so maybe I'll just do the ones that are appropriate to this.
Of course, she was on the Twitter board.
She was a Google exec.
Founder and CEO of World Labs, but very, very importantly,
like we all know AI, and we all talk about kind of neural networks,
and there's a number of people that focused on making those effective.
But Fei-Fei really singularly brought data into the equation,
which now we're recognizing is actually probably the bigger problem,
the more interesting one.
And so she truly is the godmother of AI, as everybody calls her.
And Fei-Fei, why did you have to have Martin as the first investor?
Well, first of all, I knew Martin for more than a decade.
You know, I joined Stanford in 2009 as a young assistant professor,
and Martin was finishing his PhD there.
So I've always known him.
And of course, Martin's advisor, Nick McKeown, was a good friend,
and I always knew Martin was going
to become a very successful entrepreneur
and a very successful investor.
So we see each other, we talk about things,
but as I was formulating the idea of World Labs,
I was looking for what I would call my unicorn investor.
I don't know if that's a word,
but that's how I think about this,
who is not only obviously a very established
and successful investor
who can be with entrepreneurs
on this journey through the ups
and downs, who can be very insightful, who can bring all kinds of knowledge, advice, and resources.
But I was also particularly looking for an intellectual partner.
Because what we are doing at World Labs is very deep tech.
We are trying to do something no one else has done.
We know, with a lot of conviction, it will change the world, literally.
But I need someone who is a computer scientist, who is a student of AI, who understands product,
market, customers, go-to-market, and who can just be on the phone or in person with me
every moment of the day as an intellectual partner. And here we are. We talk almost every day.
It is true. Yeah. Amazing. The origin story of us first connecting is actually
pretty interesting. So Fei-Fei has clearly been thinking about this idea for a very long time,
like well before the company started. So maybe years even. And she has this very deep intuition of what
AI needs in order to basically navigate the world, right?
But we were at one of Mark's fancy lunches, and there's a bunch of AI people,
and everybody was so excited about LLMs, right?
And it was talking about language.
And I'd come to this independent conclusion just because I've actually done a lot of image investing.
That, like, that wasn't the end of the story.
And so Fei-Fei was at the end of this table, all these people talking about it,
and Fei-Fei leaned over to me, she's like, you know what we're missing?
And I said, what are we missing? She said, we're missing a world model.
And I'm like, yes.
And it fell into place then because I'd been like thinking about stuff at a high level.
But as she does, she just kind of perfectly articulated this.
So she had years' worth of thinking about this and had talked to people, et cetera.
And so in some way, we kind of, on our own crooked paths, had arrived at a very similar intuition.
Hers was like way more filled out.
Mine was just this kind of fuzzy thing.
But then after that, we actually had a number of conversations where we both agreed that we were aligned on this kind of idea.
Actually, I don't know if you know this.
So, of course, during that lunch, we hit it off on this world model idea.
But I was at that point already talking to various people, not just computer scientists, technologists, but
also investors, potential business partners. And to be honest, most people didn't get it.
You know, when I say world model, they nod, but I can just tell. That was just a polite nod.
So I called Martin. I'm like, do you mind coming over to Stanford campus and having coffee with me?
Coupa Café. And then as soon as Martin came and sat down, I said, Martin, can you define your
world model to me? I really wanted to hear if Martin actually meant it.
And the way he defined it, as an AI model that truly understands the 3D structure, shape,
and compositionality of the world, was exactly what I was talking about.
And I was like, wow, he's the only person so far I've talked to who actually meant it.
It's not just nodding.
Wow.
Okay, so we're going to get to World Labs and the specifics of this.
But first, I want to take you back both to your PhD days, your professor days,
and reflect on, if you could go back in time and sort of have knowledge of
what's happened in the preceding 10 years in AI,
what do you think would have been the biggest surprises
or what's the thing that you didn't see coming
that would have shocked your younger self?
Or did you have a good sense of how this field was going to play out?
Yeah, it's ironic to say because, as Martin said,
I was the person who brought data into the AI world,
but I still continue to be so surprised,
not surprised intellectually,
but surprised emotionally
that the data-hungry, data-driven AI can come this far and genuinely have incredible emergent behaviors of a thinking machine, right?
Yeah.
Let's get into the specifics.
Why start another foundation model company?
Why aren't LLMs enough?
My intellectual journey is not about companies or papers.
It's about finding the North Star problem.
So it's not like I woke up and said, I have to do a company.
I wake up every day, day after day, for the past few years, thinking
that there's so much more than language.
Language is an incredibly powerful encoding of thoughts and information,
but it's actually not a powerful encoding of the 3D physical world
that all animals and living things live in.
And if you look at human intelligence, so much is beyond the realm of language.
Language is a lossy way to capture the world.
And also, one subtlety of language is that it's purely generative.
Language doesn't exist in nature.
We look around, there's not a syllable or word,
whereas the entire physical, perceptual, visual world is there,
and animals' entire evolutionary history
is built upon so much perceptual and, eventually, embodied intelligence.
Humans not only survive, live, and work,
but we build civilization, beyond language, upon constructing the world and changing the world.
So that's the problem I want to tackle.
And in order to tackle that problem, obviously research was important,
and I spent years doing that as an academic.
And it's still fun, but I do realize, especially talking to Martin,
that the time has come when concentrated, industry-grade effort,
focused effort in terms of compute, data, and talent,
is really the answer to bringing this to life.
And that's why I wanted to start World Labs.
Amazing.
Yeah, Eric, you can do a very simple thought experiment
that kind of highlights the difference
between language and space.
So if I put you in a room and I blindfolded you
and I just described the room
and then I asked you to do a task,
the chances of you being able to do it are very little.
I'm like, oh, ten feet in front of you is, like, a cup.
Like, you know, it's just this very inaccurate way to convey reality
because reality is so complex and it's so exact, right?
On the other hand, if I took off the blindfold and you can see the actual space, right?
And what your brain is doing is actually reconstructing the 3D, right?
Then you can actually go and manipulate things and touch things, right?
And so one way to think about it is we do a lot of language processing, and we use that to communicate
high-level ideas, et cetera.
But when it comes to navigating the actual world, we really, really rely on the world itself
and our ability to reconstruct it.
And how and when did you realize that language
wasn't enough? Because it seems like it's not
super widely known. I don't hear about this all the time.
Well, if you ask me, like, what is this
surprising breakthrough?
It's that language went first because we've worked
so hard on robotics, right? I mean, I feel
even looking at autonomous vehicles, as an industry,
we've invested, like, $100 billion in it.
I remember when Sebastian Thrun, like, actually won
the DARPA Grand Challenge in 2005.
And we're like, hooray!
AV is done, right?
And then 20 years later, like, we're finally there, $100 billion in, et cetera.
This is like a 2D problem.
And so that was the path we're going on,
is do you actually solve, like, world navigation?
And it's hard. And then, out of nowhere, come these LLMs.
And they are unit economic positive.
They solve all of these language problems, like, basically immediately.
And so it just took me a moment.
Actually, Faye Faye said it beautifully early on when we were talking,
which is the part of our brain that actually deals
with language is actually pretty recent.
And so we're actually pretty inefficient at it, right?
And so the fact that a computer does it better
is not super surprising.
But the part of the brain that actually does the navigation,
you know, the spatial has been around,
it's a mammalian brain.
It's maybe the reptilian brain has been about four million years.
It's even more than that.
It's a trilobite brain.
Yeah, yeah, right.
If a trilobite had a brain.
Right.
500 million years.
Yeah.
So it's almost like we're unrolling evolution, right?
So the language part is actually very, very important
for like high-level concepts and like the laptop class-type work,
which is what it's impacting right now.
But when it comes to space, and this is everything from robotics,
so anything where you're trying to construct something physical,
you have to solve this problem.
And then we know from AV that it's a very tough problem.
And then maybe this is worth talking about.
Like the generative wave gave us some insight
in how you might want to do it.
So it really felt like that was the time.
My journey is very different because I've always been in vision, right?
So I feel like I didn't need LLMs to convince me
that LWMs are important.
I do want to say, we're not here bashing language.
I'm just so excited.
In fact, seeing ChatGPT and LLMs and these foundation models
having such breakthrough success inspires us to realize the moment is closer for world models.
But Martin said it so beautifully, it's that space, the 3D space, the space out there,
the space in your mind's eye, the spatial intelligence that enables people to do so many things
beyond language, is a critical part of intelligence.
It goes from ancient animals all the way to humanity's most innovative findings,
such as the structure of DNA, right?
That double helix in 3D space, there's no way you can use language alone to reason that out.
So that's just one example.
Another one of my favorite scientific examples is the buckyball,
a carbon molecule structure that is so beautifully constructed.
That kind of example shows how
incredibly profound space and the 3D world are.
Let's paint even more of a picture.
When World Labs has achieved its vision, or world models have achieved their vision,
what are some applications or use cases that we can present to the audience to help make it concrete?
Yeah, there is a lot, right?
For example, creativity is very visual.
We have creators from design to movies to architecture to industrial design.
Creativity is not only for entertainment.
It could be for productivity, for machinery,
for many things. That alone is a highly visual, perceptual, spatial area, or areas, of work.
Of course, we mentioned robotics.
Robotics to me is any embodied machine.
It's not just humanoids or cars.
There's so much in between, but all of them have to somehow figure out the 3D space
they live in, have to be trained to understand the 3D space,
and have to do things, sometimes even collaboratively with humans,
and that needs spatial intelligence.
And of course, I think one thing that's very exciting for me
is that for the entirety of human civilization,
we all collectively as people lived in one 3D world,
and that is the physical Earth 3D world.
A few of us went to the moon, but it's a very small
number. But that's one world. And that's what makes the digital virtual world incredible. With
this technology, which we should talk about, it's the combination of generation and
reconstruction. Suddenly, we can actually create infinite universes. Some are for robots. Some are for
creativity. Some are for socialization. Some are for travel. Some are for storytelling. It suddenly
will enable us to live in a multiverse way.
The imagination is boundless.
I think it's very important
because these conversations
can sound abstract, but they're actually not.
But the reason they sound abstract
is because it's truly horizontal,
just like LLMs are, right?
So like if you guys say, like,
what are LLMs good at?
The same LLM we use for like an emotional conversation,
we use it to write code.
We use it for to-do lists.
We use it for self-actualization, right?
And so I think we can get actually
pretty concrete about what these models do, right?
And so let me just give it a shot,
and then Fei-Fei is the expert, of course.
So with these models,
you can take a view of the world, like a 2D view of the world,
and then you could actually create a 3D full representation,
including what you're not seeing, like the back of the table, for example,
within the computer.
So given just a 2D view, you have the full thing,
and then you ask, okay, well, what can you do with that thing, for example?
Well, you can manipulate it, you can move it, you can measure it, you can stack things.
So anything that you would do in a space, you could do, right?
I mean, you could do architecture, you could design.
But it turns out the ability to fill out the back of the table
means that you can fill out stuff
that was never there to begin with, right?
So let's say that I just had a 2D picture of this.
I could create a 360 of everything, right?
And so now you have fully generative...
And so what does that mean?
That means that's video games, it's creativity.
And so it's a super horizontal piece
that takes basically a computer
with a single view in the world
or maybe multiple views in the world
and creates a full 3D representation
that that computer then can act on.
And so you can see that that's a very concrete, pivotal thing
from everything from, like, robotics
to video games to art and design.
Yeah.
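To make that concrete, here is a minimal Python sketch (hypothetical code, not World Labs' actual system or API) of the kind of downstream queries an explicit 3D representation enables: once a generative world model has lifted a single 2D view into geometry, including surfaces the camera never saw, measuring and moving things becomes simple vector math.

```python
# Hypothetical sketch, not World Labs' API: downstream spatial queries on an
# explicit 3D scene that a world model would produce from a single 2D image.
import numpy as np

class Scene3D:
    def __init__(self, points: np.ndarray, colors: np.ndarray):
        self.points = points  # (N, 3) xyz positions, including inferred, unseen surfaces
        self.colors = colors  # (N, 3) RGB values per point

    def distance(self, i: int, j: int) -> float:
        # Metric distance between two scene points: not recoverable from pixels alone.
        return float(np.linalg.norm(self.points[i] - self.points[j]))

    def translate(self, offset: np.ndarray) -> "Scene3D":
        # "Manipulate" the scene by moving geometry; a real system would move
        # individual objects after segmenting them.
        return Scene3D(self.points + offset, self.colors)

# A generative world model would output the Scene3D, hallucinating the back of
# the table and everything outside the camera's view. Here we use a toy scene.
scene = Scene3D(points=np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.5]]),
                colors=np.array([[255, 0, 0], [0, 255, 0]]))
print(scene.distance(0, 1))  # ~1.12, in whatever units the scene is expressed in
```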
It seems like we haven't fully been appreciating
sort of the 3D components until now.
Is that fair to say?
It is fair to say.
In fact, I think it took evolution a long time.
3D is not an easy problem,
but I always come back to the fact
that I had a conversation with my 6-year-old years ago
about why trees don't have eyes.
And the fundamental thing is trees don't move.
They don't need eyes.
So the fact that
the entire basis of animal life is moving and doing things and interacting
gives life to perception and spatial intelligence.
And in turn, spatial intelligence is going to reinvent horizontally, as Martin said,
so many of the ways humans work and live.
Yeah, fascinating.
But it is definitely worth asking the question,
why can't you just use 2D video for this, right?
3D is very, very fundamental to this.
Fei-Fei, you suggested let's get deeper into the technology.
What can we share more about how it works or what the breakthrough is
or what's worth commenting on the technology?
To Martin's point, does it need to be 3D, or can you just use 2D?
I think you can do a lot of things using 2D.
And the fact is that 2D will get you very far.
In fact, today's multimodal LLMs are already making a big difference in the robot learning world,
or in guiding you to know what's next, the state of the world.
But fundamentally, physics happens in 3D and interaction happens in 3D.
Navigating behind the back of the table needs to happen in 3D.
Composing the world, whether physically, digitally, needs to happen in 3D.
So fundamentally, the problem is a 3D problem.
One way to think about it is if it's a human being looking at, say, a 2D video,
the human being can reconstruct the 3D in their head, right?
But let's say I've got a robot that has the output of the model.
If that's 2D, and then you ask the robot to, I don't know, estimate distance or grab something,
that information's missing.
The X, Y, Z just isn't there at all, right?
And so for many things that are spatial, you need to provide that information to the computer
so that you can actually navigate in 3D space.
And so 2D video is great if it's a human because we already can turn it into 3D,
but for any computer program, it'll need to be 3D.
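As a small illustration of the missing information, here is the standard pinhole back-projection in Python; the camera intrinsics below are made-up example values. With only a pixel coordinate (u, v), a robot has no metric position to act on; with depth and calibration, the 3D point, and therefore reach distances, follow directly.

```python
# Standard pinhole back-projection; intrinsics here are hypothetical example values.
import numpy as np

def backproject(u: float, v: float, z: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Turn a pixel (u, v) with known depth z into a camera-frame XYZ point."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example intrinsics for a hypothetical 640x480 camera.
fx = fy = 525.0
cx, cy = 320.0, 240.0

point = backproject(u=400.0, v=300.0, z=1.5, fx=fx, fy=fy, cx=cx, cy=cy)
print(point)                   # [~0.23, ~0.17, 1.5] meters in the camera frame
print(np.linalg.norm(point))   # how far a gripper must reach; a 2D pixel alone can't give this
```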
Actually, I want to tell you a personal story about five years ago.
Ironically, I lost my stereo vision for a few months
because I had a cornea injury.
And that means I was literally seeing with one eye.
And like Martin said, my whole life has been trained with stereo vision.
So even if I was seeing with one eye, I kind of knew what the 3D world looked like.
But it was a fascinating period as a computer vision scientist for me to experience
what the world is like.
And one thing that truly drove home, literally, is I was frightened to drive.
Wow.
First of all, I couldn't get on the highway.
That speed, I could not, you know.
But I was just driving in my own neighborhood, and I realized I didn't have a good distance
measure between my car and the parked cars on a small local road,
even though I have an almost perfect understanding of how big my car is, how big the neighbors'
parked cars are.
I've known those roads for years and years,
but just driving there,
I had to be so slow,
like almost 10 miles an hour
so that I don't scratch the cars.
And that was exactly why we needed stereo vision.
That's actually a great articulation of why 3D is just actually key
if you're doing some processing, right?
Yeah, so I don't recommend it,
but if you're daring, park car one and drive car two
with one eye covered and feel it.
In your own car.
On the tech side, with LLMs, a lot of the research was done
at the big companies. What's the state of the research here?
This is definitely a newer area of research compared to LLMs.
It's not totally fair to say new because in computer vision as a field, we have been doing bits
and pieces.
For example, one important revolution that has happened in 3D computer vision was neural radiance
fields, or NeRF, and that was done by our co-founder Ben Mildenhall and his colleagues at Berkeley.
And that was a way to do 3D reconstruction using deep learning
that was really taking the world by storm about four years ago.
We've also got a co-founder, Christoph Lassner,
whose pioneering work was part of the reason Gaussian splat representation
started to, again, become really popular as a way to represent volumetric 3D.
And of course, Justin Johnson, who was my former student
and also a co-founder of World Labs,
was among the first generation of deep learning computer vision students
who did so much foundational work in image generation.
Before Transformers were out,
we were using GANs to do image generation
and then style transfer,
which really popularized some of the components
or ingredients of what we're doing here.
So things were happening in academia,
things were happening in industry.
But I agree, what is exciting now is that at World Labs,
we just have the conviction that we're going to be all in
on this one singular big North Star problem,
concentrating the world's smartest people in computer vision,
in diffusion models, in computer graphics, in optimization, in AI, in data.
All of them come into this one team
and try to make this work and to productize it.
I will say from an outsider standpoint,
and so I'm not an expert in any of these spaces,
but it really feels like to solve this problem,
you need experts both in AI,
and that's like the data and the models,
like the actual model architecture and graphics,
which is like how do you actually represent these things
in memory, in a computer, and then on the screen?
It's a very special team to actually crack this problem,
which Fei-Fei has managed to put together.
Well, that's an inspiring note to wrap up on.
Thank you so much for joining us.
Thank you. Thank you, Eric.
Thanks for listening to the A16Z podcast.
If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z.
We've got more great conversations coming your way.
See you next time.
