a16z Podcast - The Frontier of Spatial Intelligence with Fei-Fei Li
Episode Date: September 19, 2024

Fei-Fei Li and Justin Johnson are pioneers in AI. While the world has only recently witnessed a surge in consumer AI, our guests have long been laying the groundwork for innovations that are transforming industries today.

In this episode, a16z General Partner Martin Casado joins Fei-Fei and Justin to explore the journey from early AI winters to the rise of deep learning and the rapid expansion of multimodal AI. From foundational advancements like ImageNet to the cutting-edge realm of spatial intelligence, Fei-Fei and Justin share the breakthroughs that have shaped the AI landscape and reveal what's next for innovation at World Labs.

If you're curious about how AI is evolving beyond language models and into a new realm of 3D, generative worlds, this episode is a must-listen.

Resources:
Learn more about World Labs: https://www.worldlabs.ai
Find Fei-Fei on Twitter: https://x.com/drfeifei
Find Justin on Twitter: https://x.com/jcjohnss
Find Martin on Twitter: https://x.com/martin_casado

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
This is fundamentally, philosophically, to me a different problem.
The previous decade had mostly been about understanding data that already exists.
But the next decade was going to be about understanding new data.
Visual, spatial intelligence is so fundamental.
It's as fundamental as language.
It is like unwrapping presents on Christmas, that every day you know there's going to be some amazing new discovery,
some amazing new application or algorithm somewhere.
If we see something, or if we can imagine something, both can converge towards generating it.
I think we're in the middle of a Cambrian explosion.
To many, the last two years of AI have felt like a light switch: pre and post GPT-3,
pre and post being able to generate an image with natural language,
and even pre and post translating any video with the click of a button.
But to some, like Dr. Fei-Fei Li, often referred to as the "godmother of AI" and longtime professor of computer science at Stanford, who, by the way, taught some very well-known researchers like Andrej Karpathy. To people like Fei-Fei, artificial intelligence unlocks have existed on a multi-decade-long continuum. And that continuum is destined to proceed into the physical, spatial world. At least, that's what Fei-Fei and her co-founders of the new company, World Labs, believe. And these four founders pioneered the ecosystem in so many ways, from Fei-Fei's ImageNet to Justin Johnson's work on scene graphs, Ben Mildenhall's work on NeRFs, or even Christoph Lassner's work on the precursor to the Gaussian splat. And in today's episode, you'll get to hear from Fei-Fei and Justin as they explore this evolution with a16z general partner Martin Casado.
From the very earliest seeds to the recent explosion of consumer-grade AI applications
and the key watershed moments along the way.
We'll, of course, dive into the why now behind World Labs,
but also their choice to focus on spatial intelligence
and what it might really take to build at that frontier,
from algorithmic unlocks to hardware.
All right, let's get started.
As a reminder, the content here is for informational purposes only,
should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security
and is not directed at any investors or potential investors in any a16z fund.
Please note that a16z and its affiliates
may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments,
please see a16z.com slash disclosures.
Over the last two years,
we've seen this kind of massive rush of consumer AI companies and technology,
and it's been quite wild,
but you've been doing this now for decades.
And so maybe we can walk through a little bit about how we got here,
kind of like your key contributions and insights along the way.
So it is a very exciting moment, right?
Just zooming back, AI is in a very exciting moment.
I personally have been doing this for two decades plus,
and we have come out of the last AI winter.
We have seen the birth of modern AI.
Then we have seen deep learning taking off,
showing us possibilities like playing chess.
But then we're starting to see the deepening of the technology and the industry adoption of some of the earlier possibilities, like language models. And now I think we're in the middle of a Cambrian explosion in almost a literal sense, because now, in addition to text, you're seeing pixels, videos, all coming with possible AI applications and models. So it's a very exciting moment.
I know you both so well. And many people know you both so well because you're so prominent in
the field, but not everybody grew up in AI. So maybe it's kind of worth just going through like
your quick backgrounds, just to kind of level set the audience.
Yeah, sure. So I first got into AI at the end of my undergrad. I did math and computer science for undergrad at Caltech. It was awesome. But then towards the end of that, there was this paper that came out that was, at the time, a very famous paper: the cat paper from Honglak Lee, Andrew Ng, and others who were at Google Brain at the time. And that was the first time that I came across this concept of deep learning. And to me, it just felt like this amazing technology. And that was the first time that I came across this recipe that would come to define the next decade-plus of my life, which is that you can get these amazingly powerful learning
algorithms that are very generic, couple them with very large amounts of compute, couple them with very large amounts of data, and magic things started to happen when you combined those ingredients. So I first came across that idea around 2011, 2012-ish, and I just thought, oh, my God,
this is going to be what I want to do. It was obvious you've got to go to grad school to do this stuff,
and then saw that Fei-Fei was at Stanford, one of the few people in the world at the time
who was on that train. And that was just an amazing time to be in deep learning and computer vision
specifically, because that was really the era when this went from these first nascent bits of
technology that were just starting to work and really got developed and spread across a ton of
different applications.
So then over that time, we saw the beginnings of language modeling.
We saw the beginnings of discriminative computer vision, where you could take pictures
and understand what's in them in a lot of different ways.
We also saw some of the early bits of what we would now call gen AI: generative modeling, generating images, generating text.
A lot of those core algorithmic pieces actually got figured out by the academic community during
my PhD years. It was a time I would just wake up every morning and check the new papers on
archive and just be ready. It's like unwrapping presents on Christmas. Every day you know there's
going to be some amazing new discovery, some amazing new application or algorithm somewhere in the
world. In the last two years, everyone else in the world kind of came to the same realization
of using AI to get new Christmas presents every day. But I think for those of us that have been
in the field for a decade or more, we've sort of had that experience for a very long time.
I came to AI through a different angle, which is from physics, because my undergraduate background was physics. But physics is the kind of discipline that teaches you to ask audacious questions and think about the still-remaining mysteries of the world. Of course, in physics, it's the atomic world, you know, the universe and all that. But somehow that kind of training in thinking got me into the audacious question that really captured my own imagination, which is intelligence.
So I did my PhD in AI and computational neuroscience at Caltech.
So Justin and I actually didn't overlap, but we share the same alma mater at Caltech.
And the same advisor.
Yes, same advisor, your undergraduate advisor, my PhD advisor, Pietro Perona.
And my PhD time, which is similar to your PhD time, was when AI was still in the winter in the public eye. But it was not in the winter in my eyes, because it was that hibernation right before spring. There was so much life. Machine learning and statistical modeling were really gaining power.
I think I was part of the native generation of machine learning and AI, whereas I look at Justin's generation as the native deep learning generation. So machine learning was the precursor of deep learning, and we were experimenting with all kinds of models. But one thing became clear at the end of my PhD and the beginning of my assistant professor time: there was an overlooked element of AI that is mathematically important to drive generalization, but the whole field was not thinking that way, and it was data. Because we were thinking about the intricacy of Bayesian models or kernel methods and all that. But what was fundamental, and what my students and my lab realized probably earlier than most people, is that if you let data drive models, you can unleash a kind of power that we haven't seen before. And that was really the reason we went on a pretty crazy
bet on ImageNet. Which is, you know, forget about any scale we're seeing now; at that point it was thousands of data points. The NLP community had their own data sets. I remember the UC Irvine data set, or some data set in NLP; it was small. The computer vision community had their data sets, but all on the order of thousands or tens of thousands. We were like, we need to drive it to Internet scale. And luckily, it was also the coming of age of the Internet, so we were riding that wave. And that's when I came to Stanford.
So these epochs are what we often talk about. ImageNet is clearly the epoch that created, or at least made popular and viable, computer vision.
In the gen AI wave, we talk about two kinds of core unlocks.
One is the Transformers paper, which is attention.
We talk about Stable Diffusion.
Is that a fair way to think about this, which is there's these two algorithmic unlocks
that came from academia or Google, and that's where everything comes from, or has it been
more deliberate?
Or have there been other kind of big unlocks that kind of brought us here that we don't
talk as much about?
I think the big unlock is compute.
I know the story of AI is often the story of compute, but no matter how much people talk
about it, I think people underestimate it.
Right. And the amount of growth that we've seen in computational power over the last decade is astounding.
The first paper that's really credited with the breakthrough moment in computer vision for deep learning was AlexNet, which was a 2012 paper where a deep neural network did really well on the ImageNet challenge and just blew away all the other algorithms that Fei-Fei had been working on earlier in grad school.
That AlexNet was a 60-million-parameter deep neural network, and it was trained for six days on two GTX 580s, which was the top consumer card at the time and came out in 2010. So I was looking at some numbers last night just to put these in perspective. And the newest, latest and greatest from NVIDIA is the GB200. Do either of you want to guess how much raw compute factor we have between the GTX 580 and the GB200?
Shoot, no, what? Go for it.
It's in the thousands. So I ran the numbers last night: that training run, that six days on two GTX 580s, if you scale it, comes out to just under five minutes on a single GB200.
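To make that comparison concrete, here is a rough back-of-the-envelope sketch using only the figures quoted in the conversation (six days on two GPUs versus roughly five minutes on one); the resulting factor is illustrative, not an official benchmark.

```python
# Back-of-the-envelope check of the speedup factor described above,
# using only the numbers quoted in the conversation.

days_on_gtx580 = 6          # AlexNet training time mentioned above
num_gtx580 = 2              # number of GTX 580s used
minutes_on_gb200 = 5        # "just under five minutes on a single GB200"

gpu_minutes_2012 = days_on_gtx580 * 24 * 60 * num_gtx580   # total GPU-minutes in 2012
speedup = gpu_minutes_2012 / minutes_on_gb200

print(f"2012 training cost: {gpu_minutes_2012} GPU-minutes")
print(f"Implied raw compute factor: ~{speedup:.0f}x")       # roughly 3,500x, i.e. "in the thousands"
```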
Justin is making a really good point.
The 2012 AlexNet paper on the ImageNet challenge is literally a very classic model, and that is the convolutional neural network model.
And that was published in the 1980s; I remember learning that paper as a graduate student.
And it more or less also has six, seven layers.
Practically the only difference between AlexNet and that ConvNet?
The difference is the two GPUs and the deluge of data.
Yeah.
So I think most people now are familiar with, quote, the bitter lesson.
And the bitter lesson says: if you make an algorithm, don't be cute.
Just make sure you can take advantage of available compute, because the available compute will show up.
On the other hand, there's another narrative, which seems to me to be just as credible,
which is it's actually new data sources that unlock deep learning, right?
Like ImageNet is a great example.
Self-attention is great from transformers, but they'll also say this is a way you can exploit human labeling of data
because it's the humans that put the structure in the sentences.
And if you look at CLIP, let's say, well,
we're using the Internet to actually have humans use the alt tag to label images, right?
And so, like, that's a story of data.
That's not a story of compute.
And so is the answer just both or is, like, one more than the other?
I think it's both.
But you're hitting another really good point.
So I think there's actually two epochs that, to me, feel quite distinct in the algorithms here.
So, like, the ImageNet era is actually the era of supervised learning.
So in the era of supervised learning, you have a lot of data,
but you don't know how to use data on its own.
Like the expectation of ImageNet and other data sets of that time period
was that we're going to get a lot of images,
but we need people to label every one of them.
And for all of the training data that we're going to train on,
a human labeler has looked at every one and said something about that image.
And the big algorithmic unlock is that we now know how to train on things that don't require human-labeled data.
As the naive person in the room that doesn't have an AI background,
it seems to me if you're training on human data,
the humans have labeled it.
It's just not explicitly.
I knew you were going to say that, Martin.
I knew that.
Yes, philosophically, that's a really important question.
But that actually is more true in language than pixels.
Fair enough.
Yeah, yeah, yeah, yeah.
But I do think it's an important distinction, because CLIP really is human-labeled.
I think attention is, humans have, like, figured out the relationships of things, and then you learn them.
So it is human label, just more implicit than explicit.
Yeah, it's still human labeled.
The distinction is that for this supervised learning era, our learning tasks were much more constrained.
So you would have to come up with this ontology of concepts that we want to discover, right? If you're doing ImageNet, Fei-Fei and your students at the time spent a lot of time thinking about which thousand categories should be in the ImageNet challenge.
For other data sets of that time, like the COCO data set for object detection, they thought really hard about which 80 categories to put in there.
So let's walk to gen AI. So when I was doing my PhD, before you came, I took machine learning from Andrew Ng, and then I took Bayesian-something-very-complicated from Daphne Koller, and it was very complicated for me.
A lot of that was just predictive modeling. And then I remember the whole kind of vision stuff that you all unlocked. But then the generative
stuff has shown up, I would say, in the last four years, which is, to me, very different.
You're not identifying objects. You're not predicting something. You're generating something.
And so maybe kind of walk through, like, the key unlocks that got us there, and then why it's
different. And if we should think about it differently, and is it part of a continuum? Is it not?
It is so interesting. Even during my graduate time, generative models were there.
We wanted to do generation. Nobody remembers it, but even with letters and numbers, we were trying to do some.
Geoff Hinton had generative papers.
We were thinking about how to generate.
And in fact, if you think from a probability distribution point of view, you can mathematically
generate.
It's just nothing we generate would ever impress anybody, right?
So this concept of generation mathematically theoretically is there, but nothing worked.
Justin's PhD, his entire PhD, is a story, almost a mini-story, of the trajectory of the field. He started his first project in data. I forced him to. He didn't like it.
In retrospect, I learned a lot of really useful things. I'm glad you say that now.
So actually, my first paper, both of my PhD and ever, my first academic publication ever, was image retrieval with scene graphs. And then we went into taking pixels and generating words, and Justin and Andrej really worked on that. But that was still a very, very lossy way of
generating and getting information out of the pixel world.
And then in the middle, Justin went off and did a very famous piece of work.
And it was the first time that someone made it real time, right?
Yeah, yeah. So the story there is there was this paper that came out in 2015, A Neural Algorithm of Artistic Style, led by Leon Gatys. The paper came out and they showed these real-world photographs that they had converted into Van Gogh style. We are kind of used to seeing things like this in 2024, but this was in 2015. So this paper just popped up on arXiv one day, and it blew my mind. I just got this gen AI brainworm in my brain in 2015, and it did something to me. And I thought, oh, my God, I need to understand this algorithm. I need to play with it. I need to make my own images into Van Gogh. So then I, like, read the paper, and then over a long weekend I re-implemented the thing and got it to work. It was actually a very simple algorithm. My implementation was, like, 300 lines of Lua, because at the time it was pre-PyTorch. It was Lua. This was pre-PyTorch, so we were using Lua Torch. But it was a very simple algorithm, and it was slow, right?
So it was an optimization-based thing.
For every image you want to generate, you need to run this optimization loop, run this gradient-descent loop for every image that you generate.
The images were beautiful, but I just wanted it to be faster.
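For readers who want to see the shape of the per-image optimization loop being described, here is a minimal sketch in modern PyTorch of the Gatys-style approach: the pixels of the output image are optimized directly against VGG content features and style Gram matrices. This is an illustrative re-creation, not the original 300 lines of Lua Torch; the layer choices, weights, and step counts are arbitrary.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg16(weights=VGG16_Weights.DEFAULT).features.to(device).eval()

def features(img, layers=(3, 8, 15, 22)):
    """Collect activations from a few VGG layers."""
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f):
    """Gram matrix used as the style statistic."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content, style, steps=200, style_weight=1e6):
    """Optimize the output pixels to match content features and style Grams."""
    with torch.no_grad():
        c_feats = features(content)
        s_grams = [gram(f) for f in features(style)]
    img = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):                      # the per-image gradient-descent loop
        opt.zero_grad()
        feats = features(img)
        c_loss = F.mse_loss(feats[-1], c_feats[-1])
        s_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, s_grams))
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return img.detach()

# Usage: `content` and `style` are (1, 3, H, W) tensors normalized like VGG inputs.
```

The later "feed-forward" speedups replace this loop with a network trained once to apply a given style in a single pass, which is why they run in real time.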
And Justin just did it.
And it was actually, I think, your first taste of an academic work having an industry impact.
A bunch of people had seen this artistic style transfer stuff at the time.
And me and a couple others at the same time came up with different ways to speed this up.
But mine was the one that got a lot of traction.
Before the world came to understand gen AI, Justin's last piece of work in his PhD was actually inputting language and getting a whole picture out.
It's one of the first gen AI works, using GANs, which were so hard to use.
The problem is that we were not ready to use a natural piece of language.
So, as you heard, Justin worked on scene graphs, so we had to input a scene-graph language structure: the sheep, the grass, the sky, in the graph way. It literally was one of our photos, right?
And then he and another very good master's student, Agrim, they got that GAN to work.
So you can see: from data, to matching, to style transfer, to generative images, we're starting to see it.
You asked if this is an abrupt change.
For people like us, it's already happening in a continuum.
But for the world, the results are more abrupt.
So I read your book, and for those that are listening, it's a phenomenal book.
I really recommend you read it.
And it seems, for a long time, and I'll talk to you, Fei-Fei, like a lot of your research has been,
and your direction has been, towards kind of spatial stuff and pixel stuff and intelligence.
And now you're doing World Labs, and it's around spatial intelligence.
And so maybe talk through, is this been part of a long journey for you?
Like, why did you decide to do it now?
Is it a technical unlock?
Is it a personal unlock?
Move us from that milieu of AI research to World Labs.
For me, it is both personal and intellectual, right?
My entire intellectual journey is really this passion to seek North Stars,
but also believing that those North Stars are critically important
for the advancement of our field.
So at the beginning, I remember, after graduate school, I thought my North Star was telling stories from images, because for me, that's such an important piece of visual intelligence. That's part of what you call AI or AGI. But when Justin and Andrej did that, I was like, oh, my God, that was my life's dream. What do I do next? So it came a lot faster.
I thought it would take 100 years to do that. But visual intelligence is my passion because I do
believe for every intelligent being like people or robots or some other form, knowing how to
see the world, reason about it, interact in it, whether you're navigating or manipulating or
making things, you can even build civilization upon it. Visual, spatial intelligence is
so fundamental. It's as fundamental as language, possibly more ancient and more fundamental in certain ways.
So it's very natural for me that our North Star is to unlock spatial intelligence.
The moment to me is right.
We've got these ingredients.
We've got compute.
We've got a much deeper understanding of data, way deeper than the ImageNet days.
Compared to those days, we're so much more sophisticated.
And we've got some advancement of algorithms, including from co-founders at World Labs, like Ben Mildenhall and Christoph Lassner, who were at the cutting edge of NeRF, such that we are in the right moment to really make a bet, to focus, and to just unlock that.
So I just want to clarify it for folks that are listening to this.
You're starting this company, World Labs,
spatial intelligence is kind of how you're generally describing the problem you're solving.
Can you maybe try to crisply describe what that means?
Yeah, so spatial intelligence is about machines' ability to perceive, reason, and act in 3D space and time: to understand how objects and events are positioned in 3D space and time, how interactions in the world can affect those 4D positions over space-time, and to both sort of perceive, reason about, generate, and interact with it; really to take the machine out of the mainframe, or out of the data center, and put it out into the world, understanding the 3D, 4D world with all of its richness.
So to be very clear, are we talking about the physical world or are we just talking about an abstract notion of world?
I think it can be both, and that encompasses our vision long term.
Even if you're generating worlds, even if you're generating content, having it positioned in 3D has a lot of benefits.
Or if you're recognizing the real world,
being able to put 3D understanding into the real world as well is part of it.
Just for everybody listening,
the two other co-founders, Ben Mildenhall and Christoph Lassner,
are absolute legends in the field at the same level.
These four decided to come out and do this company now.
And so I'm trying to dig to why now is the right time.
Yeah, I mean, this is, again, part of a longer evolution for me.
But post-PhD, when I was really wanting to develop into my own independent researcher for my later career,
I was just thinking, what are the big problems in AI and computer vision?
And the conclusion that I came to about that time was that the previous decade had mostly been about understanding data that already exists.
But the next decade was going to be about understanding new data.
And if we think about that, the data that already exists was all of the images and videos that already existed on the web.
Right?
People have smartphones.
Smartphones have cameras; those cameras are new sensors; those cameras are positioned
in the 3D world. It's not just you're going to get a bag of pixels from the internet and know
nothing about it and try to say if it's a cat or a dog. We want to treat these images as universal
sensors to the physical world. And how can we use that to understand the 3D and 4D structure
of the world, either in physical spaces or generative spaces? So I made a pretty big pivot post-PHD
into 3D computer vision, predicting 3D shapes of objects with some of my colleagues at FAIR at the time.
Then later, I got really enamored by this idea of learning 3D structure through 2D, right?
Because we talk about data a lot.
3D data is hard to get on its own, but because there's a very strong mathematical connection
here, our 2D images are projections of a 3D world.
And there's a lot of mathematical structure here we can take advantage of.
So even if you have a lot of 2D data, there's a lot of people who've done amazing work
to figure out how can you back out the 3D structure of the world from large quantities of
2D observations.
And then in 2020, you asked about breakthrough moments.
there was a really big breakthrough moment from our co-founder, Ben Mildenhall at the time,
with his paper NeRF, Neural Radiance Fields.
And that was a very simple, very clear way of backing out 3D structure from 2D observations.
That just lit a fire under this whole space of 3D computer vision.
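For context, here is a heavily simplified sketch of the core idea behind NeRF-style methods, written for illustration only (the actual method adds positional encoding, view dependence, and hierarchical sampling, among other things): a small network is queried at points along each camera ray, and the predictions are volume-rendered into a pixel color, which is what lets 3D structure be trained purely against 2D photos.

```python
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Maps a 3D point to an RGB color and a volume density.
    (No positional encoding or view dependence here, unlike the real NeRF.)"""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),   # rgb (3) + density (1)
        )

    def forward(self, xyz):
        out = self.net(xyz)
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

def render_rays(field, origins, dirs, near=2.0, far=6.0, n_samples=64):
    """Volume-render a batch of rays: sample points along each ray,
    query the field, then alpha-composite front to back."""
    t = torch.linspace(near, far, n_samples, device=origins.device)       # (S,)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]       # (R, S, 3)
    rgb, sigma = field(pts)                                               # (R, S, 3), (R, S)
    delta = t[1] - t[0]                                                   # constant step size
    alpha = 1.0 - torch.exp(-sigma * delta)                               # opacity per sample
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                                               # contribution per sample
    return (weights[..., None] * rgb).sum(dim=1)                          # (R, 3) pixel colors

# Training loop (sketch): for each photo with a known camera pose, render rays,
# compare to the photo's pixel colors with an MSE loss, and backpropagate.
# That is how 3D structure gets "backed out" of purely 2D observations.
```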
I think there's another aspect here that maybe people outside the field don't quite understand.
That was also a time when large language models were starting to take off.
So a lot of the stuff with language modeling actually had gotten developed in academia.
even during my PhD, I did some early work with Andrej Karpathy on language modeling in 2014.
LSTMs, I still remember.
LSTMs, RNNs, GRUs, like, this was pre-transformer.
But then at some point, like, around the GPT2 time, like, you couldn't really do those kind of models anymore in academia
because they took way more resourcing.
But there was one really interesting thing.
The NERF approach that Ben came up with, like, you could train these in a couple hours on a single GPU.
So I think at that time, there was a dynamic that happened, which is that a lot of academic researchers ended up focusing on a lot of these problems, because there was core algorithmic stuff to figure out and because you could
actually do a lot without a ton of compute and you could get state of the art results on a single
GPU. Because of those dynamics, there was a lot of research, a lot of researchers in academia
were moving to think about what are the core algorithmic ways that we can advance this area as well.
Then I ended up chatting with Fei-Fei more, and I realized that we were actually... She's very convincing.
She's very convincing. Well, there's that, but we talked about trying to figure out
your own independent research trajectory from your advisor.
Well, it turns out we ended up kind of concluding on similar things.
Okay, well, from my end, I want to talk to the smartest person.
I call Justin.
There's no question about it.
I do want to talk about a very interesting technical story of pixels that most people working in language don't realize, which is that in the pre-gen-AI era in the field of computer vision, those of us who work on pixels actually have a long history in an area of research called reconstruction, 3D reconstruction.
It dates back to the 70s.
You take two photos, because humans have two eyes, right?
So in general, it starts with stereo photos, and then you try to triangulate the geometry
and make a 3D shape out of it.
It is a really, really hard problem.
To this day, it's not fundamentally solved because there's correspondence and all that.
So this whole field, which is an older way of thinking about 3D, has been going at it,
and it has been making really good progress.
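As a toy illustration of the classic triangulation idea mentioned here, assuming an idealized rectified stereo pair with a known focal length and baseline (all numbers below are made up), depth falls out of the pixel disparity between the two views; finding the matching points in the first place, the correspondence problem, is the part that remains hard.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic rectified-stereo triangulation: depth = f * B / d.
    `disparity_px` is how far a matched point shifts between the left and
    right images; finding those matches (correspondence) is the hard part."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / np.clip(disparity_px, 1e-6, None)

# Toy example with made-up camera numbers: a 700-pixel focal length,
# a 6.5 cm baseline (roughly human eye spacing), and a 20-pixel disparity.
print(depth_from_disparity(20.0, focal_px=700.0, baseline_m=0.065))  # ~2.3 meters
```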
But when this happened in the context of generative methods, in the context of diffusion models, suddenly reconstruction and generation started to really merge.
Now, within really a short period of time in the field of computer vision, it's hard to talk about reconstruction versus generation anymore; the difference is minimal.
We suddenly have a moment where if we see something or if we imagine something, both can converge
towards generating it.
And that's just to me a really important moment for computer vision, but most people miss
it because we're not talking about it as much as LLMs.
Right.
So in pixel space, there's reconstruction where you reconstruct like a scene that's real.
And then if you don't see the scene, then you use generative techniques, right?
So these things are kind of very similar.
Throughout this entire conversation, you're talking about languages and you're talking about pixels.
So maybe it's a good time to talk about how, like, spatial intelligence and what you're working on, contrasts with language approaches, which of course are very popular now.
Is it complementary? Is it orthogonal?
I think they're complementary.
I don't mean to be too leading here. Maybe just contrast them.
Like, everybody says, I know opening eye and I know GPT and I know multimodal models.
And a lot of what you're talking about is, like, they've got pixels and they've got languages.
And doesn't this kind of do what we want to do with spatial reasoning?
Yeah, so I think to do that, you need to open up the black box a little bit of how
these systems work under the hood. So with language models and the multimodal language models that
we're seeing nowadays, their underlying representation under the hood is a one-dimensional
representation. We talk about context lengths, we talk about transformers, we talk about sequences,
attention. Fundamentally, their representation of the world is one-dimensional. So these
things fundamentally operate on a one-dimensional sequence of tokens. So this is a very natural
representation when you're talking about language, because written text is a one-dimensional
sequence of discrete letters. So that kind of underlying representation is the thing that led to
LLMs. And now the multimodal LLMs that we're seeing now, you kind of end up shoehorning the other
modalities into this underlying representation of a 1D sequence of tokens. Now, when we move to
spatial intelligence, it's kind of going the other way, where we're saying that the three-dimensional
nature of the world should be front and center in the representation. So at an algorithmic
perspective, that opens up the door for us to process data in different ways, to get different
kinds of outputs out of it, and to tackle slightly different problems. So even at a coarse level, you might look at it from the outside and say, oh, multimodal LLMs can look at images too. Well, they can,
but I think they don't have that fundamental 3D representation at the heart of their approaches.
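To make the contrast concrete, here is a schematic sketch, not any particular model's actual pipeline, of the "shoehorning" described above: an image flattened into a 1D sequence of patch tokens for a transformer to attend over, next to a representation that keeps explicit 3D structure front and center.

```python
import torch

image = torch.randn(3, 224, 224)            # an RGB image (C, H, W)

# ViT-style shoehorning: cut the image into 16x16 patches and flatten them
# into a 1D sequence, which is what a transformer's attention operates over.
patches = image.unfold(1, 16, 16).unfold(2, 16, 16)            # (3, 14, 14, 16, 16)
tokens = patches.permute(1, 2, 0, 3, 4).reshape(14 * 14, -1)   # (196, 768)
print(tokens.shape)   # a 1D sequence of 196 tokens; spatial layout is now implicit

# A spatially native alternative: keep an explicit 3D grid of features,
# where the (x, y, z) structure of a scene stays part of the representation.
voxel_features = torch.randn(32, 32, 32, 64)  # (X, Y, Z, feature_dim)
print(voxel_features.shape)
```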
I totally agree with Justin. I think the 1D versus fundamentally 3D representation is one of the most core differentiations. The other thing is slightly philosophical, but it's really important for me, at least: language is fundamentally a purely generated signal.
There's no language out there.
You don't go out into nature
and there are words written in the sky for you.
Whatever data you feed in,
you pretty much can just
somehow regurgitate with enough
generalizability the same data out,
and that's language to language.
But 3D world is not.
There is a 3D world out there
that follows laws of physics
that has its own structures due to materials and many other things.
And to fundamentally back that information out and be able to represent it and be able to generate it
is just fundamentally quite a different problem.
We will be borrowing similar ideas or useful ideas from language and LLMs.
But this is fundamentally philosophically to me a different problem.
So language is 1D, and probably a bad representation of the physical world, because it's been generated by humans and it's probably lossy.
There's a whole other modality of generative AI models, which are pixels, and these are 2D image and 2D video.
And like one could say that if you look at a video, you can see 3D stuff because like you can pan a camera or whatever it is.
And so like how would like spatial intelligence be different than say 2D video?
When I think about this, it's useful to disentangle two things.
One is the underlying representation, and then two is kind of the user-facing affordances that you have.
And here's where you can get sometimes confused, because fundamentally we see 2D, right?
Our retinas are 2D structures in our bodies, and we've got two of them.
So fundamentally, our visual system perceives 2D images.
But the problem is that depending on what representation you use, there could be different affordances that are more natural or less natural.
So even if, at the end of the day, you might be seeing a 2D image or a 2D video, your brain perceives that as a projection of a 3D world.
So there's things you might want to do, move objects around, move the camera around.
In principle, you might be able to do these with a purely 2D representation and model,
but it's just not a fit to the problems that you're asking the model to do.
Modeling the 2D projections of a dynamic 3D world is a function that probably can be modeled.
But by putting a 3D representation into the heart of a model, there's just going to be a better fit
between the kind of representation that the model is working on and the kind of tasks that you want that model to do.
So our bet is that by threading a little bit more 3D representation under the hood, that'll enable better affordances for users.
And this also goes back to the North Star.
For me, why is it spatial intelligence?
Why is it not flat pixel intelligence?
It's because I think the arc of intelligence has to go to what Justin calls affordances.
And the arc of intelligence, if you look at evolution, right, eventually enables animals and humans, especially the human as an
intelligent animal, to move around the world, interact with it, create civilization, create life,
create a piece of sandwich, whatever you do in this 3D world.
And translating that into a piece of technology, that native 3Dness is fundamentally important
for the flood of possible applications,
even if for some of them, the serving of them looks 2D, it's innately 3D to me.
I think this is actually a very subtle
and incredibly critical point,
and so I think it's worth digging into
and a good way to do this is talking about use cases.
And so just to level set this,
we're talking about generating a technology,
let's call it a model that can do spatial intelligence.
So maybe in the abstract,
What might that look like kind of a little bit more concretely?
There's a couple different kinds of things we imagine these spatially intelligent models able to do over time.
And one that I'm really excited about is world generation.
We're all used to something like a text-to-image generator, or we're starting to see text-to-video generators, where you put in text and out pops an amazing image or an amazing two-second clip.
But I think you could imagine leveling this up and getting 3D worlds out.
So one thing that we could imagine spatial intelligence helping
us with in the future are up-leveling these experiences into 3D, where you're getting out a full
virtual simulated but vibrant and interactive 3D world, right? Maybe for gaming, maybe for
virtual photography, you name it. Even if you got this to work, there'd be a million applications.
For education. I mean, in some sense, this enables a new form of media, right? Because we already
have the ability to create virtual interactive worlds, but it costs hundreds of millions of
dollars and a ton of development time. And as a result, one of the only places where people drive this technological ability is video games, right?
But because it takes so much labor to do so,
then the only economically viable use of that technology
in its form today is games that can be sold for $70 apiece
to millions and millions of people to recoup the investment.
If we had the ability to create these same virtual,
interactive, vibrant 3D worlds,
you could see a lot of other applications of this, right?
Because if you bring down that cost of producing
that kind of content, then people are going to use it for other things.
Right? What if you could have sort of a personalized 3D experience that's as good and as rich, as detailed as one of these AAA video games that costs hundreds of millions of dollars to produce?
But it could be catered to this very niche thing that only maybe a couple people would want that particular thing.
That's not a particular product or a particular roadmap.
But I think that's a vision of a new kind of media that would be enabled by spatial intelligence in the generative realm.
If I think about a world, I actually think about things that are not just seen generation.
I think about stuff like movement and physics.
And so, like, in the limit, is that included?
And then if I'm interacting with it, like, are there semantics?
And I mean by that, like, if I open a book, are there, like, pages and are there words in it?
And do they mean, like, are we talking, like, a full-depth experience?
Or are we talking about, like, kind of a static scene?
I think I'll see a progression of this technology over time.
This is really hard stuff to build.
So I think the static problem is a little bit easier.
But in the limit, I think we want this to be fully dynamic, fully interactable, all the things that you just said.
I mean, that's the definition of spatial intelligence.
Yeah.
So there is going to be a progression.
We'll start with more static, but everything you've said is in the roadmap of spatial intelligence.
I mean, this is kind of in the name of the company itself, World Labs.
Like, the name is about building and understanding worlds.
And this is actually a little bit of inside baseball.
I realized after we told the name to people, they don't always get it.
Because in computer vision and reconstruction and generation, we often make a distinction or a delineation about the kinds of things you can do.
And kind of the first level is objects, right?
A microphone, a cup, a chair.
These are discrete things in the world.
And a lot of the ImageNet style stuff that Fei-Fei worked on was about recognizing objects
in the world.
Then leveling up, the next level beyond objects, I think, is scenes.
Scenes are compositions of objects.
Now we've got this recording studio with a table and microphones and people and chairs
at some composition of objects.
But then we envision worlds as a step beyond scenes, right?
Scenes are kind of maybe individual things, but we want to break the boundaries, go outside the
door, step up from the table, walk out the door, walk down the street, and see the cars buzzing past and see the leaves on the trees moving, and be able to interact with those things.
Another thing that's really exciting: you just mentioned the words new media.
With this technology, the boundary between the real world and the virtual, imagined world, or augmented world, or predicted world, is all blurry.
The real world is 3D, right?
So in the digital world, you have to have a 3D representation to even blend with the real
world.
You cannot have a 2D, you cannot have a 1D representation, and be able to interface with the real 3D world in an effective way.
With this, it unlocks it.
So the use cases can be quite limitless because of this.
Right.
So the first use case that Justin was talking about would be like the generation of a virtual world for any number of use cases.
The one that you're just alluding to would be more of an augmented reality, right?
Yes.
Just around the time World Labs was being formed, the Vision Pro was released by Apple,
and they used the term spatial computing.
We were like, they almost stole ours, but we're spatial intelligence.
So spatial computing needs spatial intelligence.
That's exactly right.
So we don't know what hardware form it will take.
It'll be goggles, glasses, contact lenses.
Contact lenses.
But that interface between the true real world and what you can do on top of it,
whether it's to help you augment your capability to work on a piece of machinery and fix your car, even if you are not a trained mechanic, or to just be in a Pokémon game.
Suddenly, this piece of technology is going to be the operating system, basically, for AR, VR, and mixed reality.
In the limit, what does an AR device need to do?
It's this thing that's always on.
It's with you.
It's looking out into the world.
So it needs to understand the stuff that you're seeing and maybe help you out with tasks
in your daily life.
But I'm also really excited about this blend between virtual and physical that becomes
really critical. If you have the ability to understand what's around you in real time in perfect
3D, then it actually starts to deprecate large parts of the real world as well. Like right now,
how many differently sized screens do we all own for different use cases? Too many. Right? You've
got your phone. You've got your iPad. You've got your computer monitor. You've got your TV. You've got
your watch. These are all basically different sized screens, because they need to present information
to you in different contexts and in different positions. But if you've got the ability to seamlessly blend
virtual content with the physical world, it kind of deprecates the need for all of those.
It just ideally seamlessly blends information that you need to know in the moment with the right mechanism of giving you that information.
Another huge case of being able to blend the digital virtual world with the 3D physical world is for any agents to be able to do things in the physical world.
And if humans use these mixed reality devices to do things, like I said, I don't know how to fix a car, but if I have to, I put on these goggles or glasses and suddenly I am guided to do that. But there are other types of agents, namely robots, any kind of robots,
not just humanoid. And their interface, by definition, is the 3D world. But their compute,
their brain, by definition, is the digital world. So what connects that, from learning to behaving, between a robot's brain and the real world? It has to be spatial intelligence.
So you've talked about virtual worlds, you've talked about kind of more of an augmented reality,
and now you've just talked about the purely physical world, basically, which would be used for robotics.
For any company, that would be like a very large charter, especially if you're going to get into all of them.
How do you think about the idea of, like, deep tech versus any of these specific application areas?
We see ourselves as a deep tech company, as the platform company that provides models that can serve different use cases.
Of these three, is there any one that you think is kind of more natural early on that people can kind of expect the company to lean into?
I think it suffices to say the devices are not totally ready.
Actually, I got my first VR headset in grad school.
That's one of these transformative technology experiences.
You put it on, you're like, oh, my God, like this is crazy.
And I think a lot of people have that experience the first time they use VR.
So I've been excited about this space for a long time.
And I love the Vision Pro.
Like, I stayed up late to order one of the first ones, like the first day it came out.
But I think the reality is it's just not there yet as a platform for mass market appeal.
So very likely, as a company, we'll move into a market that's more ready, then.
But, you know, we are a deep tech company.
Then I think there can sometimes be simplicity in generality, right?
We have this notion of being a deep tech company.
We believe that there is some underlying fundamental problems that need to be solved really well.
And if solved really well, can apply to a lot of different domains.
We really view this long arc of the company as building and realizing the dreams of spatial intelligence writ large. So this is a lot of technology to build, it seems to me. Yeah, I think it's a really
hard problem. I think sometimes, for people who are not directly in the AI space, they just see AI as one undifferentiated mass of talent. And for those of us who have been here for longer, you realize
that there's a lot of different kinds of talent that need to come together to build anything in AI,
in particular this one. We've talked a little bit about the data problem. We've talked a little bit about
some of the algorithms that I worked on during my PhD, but there's a lot of other stuff we need
to do this too. You need really high-quality, large-scale engineering. You need really, really deep understanding of the 3D world. There's actually a lot of connections with computer
graphics because they've been kind of attacking a lot of the same problems from the opposite
direction. So when we think about team construction, we think about how do we find like absolute
top of the world best experts in the world at each of these different subdomains that are
necessary to build this really hard thing. When I thought about how we form the best founding team
for World Labs, it has to start with a group of phenomenal multidisciplinary founders.
And of course, Justin is natural for me.
We just covered your years as one of my best students and one of the smartest technologists.
But there are two other people I had known by reputation, and one of them Justin had worked with, that I was drooling for, right?
One is Ben Mildenhall.
We talked about his seminal work on NeRF.
But another person is Christoph Lassner, who is well reputed in the computer graphics community, and especially he had the foresight of working on a precursor of the Gaussian splat representation for 3D modeling five years, right, before Gaussian splats took off.
Ben and Christoph are legends and maybe just quickly talk about kind of like how you've
thought about the build out of the rest of the team, because again, like, there's a lot to
build here and a lot to work on, not just in kind of AI or graphics, but like systems and so
forth. Yeah, this is what, so far, I'm personally most proud of: the formidable team. I've had the privilege of working with the smartest young people in my entire career, right? From the top universities, being a professor at Stanford. But the kind of talent that we put together here at World Labs is just phenomenal. I've never seen this concentration. And I think the biggest differentiating element here is that we're believers in spatial intelligence: all of the multidisciplinary talents, whether it's systems engineering, machine learning, infra, generative modeling, data, or graphics, all of us believe in it, whether it's through our personal research journey, technology journey, or even personal hobby. And that's how we really found our founding team.
And that focus of energy and talent is humbling to me.
I just love it.
So I know you're being guided by a North Star.
So something about North Stars is like you can't actually reach them because they're in the sky,
but they're a great way to be guided.
So how will you know when you've accomplished what you've set out to accomplish?
Or is this a lifelong thing that's going to continue kind of infinitely?
First of all, there's real North Stars and virtual North Stars.
Sometimes you can reach virtual North Star.
Fair enough.
In the world model.
Exactly.
Like I said, I thought one of my North Stars, storytelling from images, would take a hundred years, and Justin and Andrej, in my opinion, solved it for me.
So we could get to our North Star.
But I think, for me, it's when so many people and so many businesses are using our models to unlock their needs for spatial intelligence.
And that's the moment I know we have reached a major milestone.
Actual deployment, actual impact.
Yeah, I don't think we're ever going to get there.
I think that this is such a fundamental thing.
The universe is a giant evolving four-dimensional structure.
And spatial intelligence writ large is just understanding that in all of its depths
and figuring out all the applications to that.
So I think we have a particular set of ideas in mind today,
but I think this journey is going to take us places that we can't even imagine right now.
The magic of good technology is that technology opens up more possibilities and unknowns.
So we will be pushing and then the possibilities will be expanding.
Brilliant. Thank you, Justin. Thank you, Fei-Fei.
This is fantastic.
Thank you, Martin.
Thank you, Martin.
All right, that is all for today.
If you did make it this far, first of all, thank you.
We put a lot of thought into each of these episodes,
whether it's guests, the calendar Tetris,
the cycles with our amazing editor Tommy until the music is just right.
So if you like what we put together,
consider dropping us a line at ratethispodcast.com slash a16z.
And let us know what your favorite episode is.
It'll make my day, and I'm sure Tommy's too.
We'll catch you on the flip side.