Big Technology Podcast - Why Can't AI Make Its Own Discoveries? — With Yann LeCun
Episode Date: March 19, 2025

Yann LeCun is the chief AI scientist at Meta. He joins Big Technology Podcast to discuss the strengths and limitations of current AI models, weighing in on why they've been unable to invent new things... despite possessing almost all the world's written knowledge. LeCun digs deep into AI science, explaining why AI systems must build an abstract knowledge of the way the world operates to truly advance. We also cover whether AI research will hit a wall, whether investors in AI will be disappointed, and the value of open source after DeepSeek. Tune in for a fascinating conversation with one of the world's leading AI pioneers.

--- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Why has generative AI ingested all the world's knowledge, but not been able to come up
with scientific discoveries of its own? And is it finally starting to understand the physical
world? We'll discuss it with Meta Chief AI Scientist and Turing Award winner, Yann LeCun.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world
and beyond. I'm Alex Kantrowitz, and I am thrilled to welcome Yann LeCun, the Chief AI
scientist, Turing Award winner, and a man known as the godfather of AI, to Big Technology
podcast.
Yann, great to see you again.
Welcome to the show.
Pleasure to be here.
Let's start with a question about scientific discovery and why AI has not been able to come
up with it until this point.
This is coming from Dwarkesh Patel.
He asked it a couple months ago.
What do you make of the fact that AIs, generative AI, basically have the entire corpus
of human knowledge memorized and they haven't been able to make a single new connection that
has led to discovery?
Whereas if even a moderately intelligent person had this much stuff memorized, they would notice, oh, this thing causes this symptom, this other thing causes this symptom, there might be a medical cure here.
So shouldn't we be expecting that type of stuff from AI?
Well, from AI, yes; from large language models, no. You know, there are several types of AI architectures, right? And all of a sudden, when we talk about AI, we imagine chatbots.
Chatbots, LLMs, are trained on an enormous amount of knowledge, which is purely text,
and they're trained to basically regurgitate, to retrieve, to essentially produce answers
that conform to the statistics of whatever text they've been trained on.
And it's amazing what you can do with them.
It's very useful.
There's no question about it.
We also know that they can hallucinate facts that aren't true. But in their
purest form, they are incapable of inventing new things.
Let me throw out this perspective that Thomas Wolf from Hugging Face shared on LinkedIn over the past week.
I know you were involved in the discussion about it. It's very interesting. He says to create an Einstein
in a data center, we don't just need a system that knows all the answers, but rather one that can
ask questions nobody else has thought or dared to ask. One that writes, what if everyone is
wrong about this when all textbooks, experts and common knowledge suggest otherwise?
Is it possible to teach an LLM to do that?
No.
No, not in the current form.
I mean, whatever form of AI would be able to do that will not be LLMs.
They might use an LLM as one component.
LLMs are useful to turn, you know, to produce text, okay?
So we might, in the future AI systems, we might use them to turn abstract thoughts into language.
In the human brain, that's done by a tiny little brain area right here called Broca's area. It's about this big. That's our LLM, okay? But we don't think in language. We think in, you know, mental representations of a situation. We have mental models of everything we think about. We can think even if we can't speak, and that takes place here. That's, like, you know, where real intelligence is, and that's the part that we haven't reproduced, certainly, with LLMs.
so the question is
you know, are we going to have eventually AI architectures, AI systems that are capable of not just answering questions that are already there, but solving, giving new solutions to problems that we specify?
The answer is yes, eventually, but not with current LLMs.
And then the next question is, are they going to be able to ask their own questions?
like figure out what are the good questions to answer.
And the answer is eventually, yes, but that's going to take a while before we get machines
that are capable of this.
Like, you know, in humans, we have all the characteristics.
We have people who are, who have extremely good memory, they can, you know, retrieve a lot
of things, they have a lot of accumulated knowledge.
We have people who are problem solvers, right?
You give them a problem, they'll solve it.
And I think Thomas was actually talking about this kind of stuff. He
said, like, you know, if you're good at school, you're a good problem solver. We give you a problem,
you can solve it. And you score well in math or physics or whatever it is. But then in research,
the most difficult thing is to actually ask the good questions. What are the important questions?
It's not just solving the problem. It's also asking the right questions, kind of framing a problem,
you know, in the right way. So you have kind of
new insight. And then after that comes, okay, I need to turn this into equations or into something,
you know, practical, a model. And that may be a different skill from the one that asks the
right questions. It might be a different skill also to solve equations. The people who write the
equations are not necessarily the people who solve them. And other people will
remember that there is, you know, some textbook from 100 years ago where
similar equations were solved, right?
Those are three different skills.
So LLMs are really good at retrieval.
They're not good at solving new problems, you know, finding new solutions to new problems.
They can retrieve existing solutions.
And they're certainly not good at all at asking the right questions.
And for those tuning in and learning about this for the first time, LLMs are the technology behind things like the GPT models baked into ChatGPT.
But let me ask you this, Yann.
So the AI field does seem to have moved from standard LLMs
to LLMs that can reason and go step by step.
And I'm curious, can you program this sort of counterintuitive
or this heretical thinking by imbuing a reasoning model
with an instruction to question its directives?
Well, so we have to figure out what reasoning really means.
Okay, and there are, you know, obviously everyone is trying to get LLMs
to reason to some extent
to perhaps be able to check
whether the answers they produce are correct.
And
the way
people are approaching the problem at the moment
is that they
basically are trying to do this
by modifying the current paradigm
without completely changing it.
So can you bolt a couple of warts on top of an LLM so that you kind of have some primitive reasoning function?
And that's essentially what a lot of the reasoning systems are doing.
You know, one simple way of getting LLMs to kind of appear to reason is chain of thought, right?
So you basically tell them to generate more tokens than they really need to,
In the hope that in the process of generating those tokens, they're going to devote more computation
to answering your question.
And to some extent that works, surprisingly, but it's very limited.
You don't actually get real reasoning out of this.
Reasoning, at least in classical AI in many domains,
involves a search through a space of potential solutions.
Okay, so you have a problem to solve,
you can characterize whether the problem is solved or not.
So you have some way of telling whether the problem is solved.
And then you search through a space of solutions
for one that actually satisfies the constraints
or is identified as being a solution.
And that, you know, that's how, that's kind of the most general form of reasoning you can
imagine.
There is no mechanism at all in LLMs for this search mechanism.
What you have is you have to kind of bolt this on top of it, right?
So one way to do this is you get an LLM to produce lots and lots and lots of sequences of answers,
right, sequences of tokens, which, you know, represent answers.
And then you have a separate system that picks which one is good.
Okay?
This is a bit like writing a program by sort of randomly more or less generating instructions, you
know, while maybe respecting the grammar of the language.
And then checking all of those programs for one that actually works.
It's not a good way, not a very efficient way of producing correct pieces of code.
It's not a good way of reasoning either.
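(As an illustrative aside, here is a minimal Python sketch of the "generate lots of answers, let a separate system pick one" approach described above. The function names are hypothetical stand-ins, not any real model's API.)

```python
# Illustrative sketch only: sample many complete answers, let a separate scorer pick one.
# generate_candidate and score_candidate are hypothetical stand-ins, not a real model API.
import random

def generate_candidate(prompt: str) -> str:
    """Stand-in for an LLM sampling one complete answer (a sequence of tokens)."""
    return f"candidate {random.randint(0, 9)} for: {prompt}"

def score_candidate(answer: str) -> float:
    """Stand-in for a separate verifier or reward model that rates a finished answer."""
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    # The "search" happens over finished token sequences, not in an internal
    # abstract space, which is the limitation discussed above.
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=score_candidate)

print(best_of_n("Plan a sequence of steps to assemble a table."))
```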
So a big issue there is that when humans or animals reason, we don't do it in token space.
In other words, when we reason,
we don't have to generate a text
that expresses our solution
and then generate another one
and then generate another one
and then among all the ones
we produce, pick the one that is good.
We reason internally, right?
We have a mental model of the situation
and we manipulate it in our head
and we find kind of a good solution
when we plan a sequence of actions
to, I don't know, build a table,
or something. We plan the sequence of actions. You know, we have a mental model of that in our head.
If I tell you, and this has nothing to do with language. Okay, so if I tell you, imagine a cube
floating in front of us right now. Now, rotate that cube 90 degrees along a vertical axis.
Okay, you can imagine this thing taking place. And you can readily observe that it's a cube.
If I rotate it 90 degrees, it's going to look just like the cube that I
started with, okay? Because you have this mental model of a cube. And that reasoning is in some
abstract continuous space. It's not in text. It's not related to language or anything like
that. And humans do this all the time. Animals do this all the time. And this is what we
yet cannot reproduce with machines. Yeah, it reminds me you're talking through
chain of thought and how it doesn't produce much novel insight. And
And when DeepSeek came out, one of the big screenshots that was going around was someone asking DeepSeek for a novel insight on the human condition.
And as you read it, it's another one of these very clever tricks, the AI pulls, because it does seem like it's running through all these different, like very interesting observations about humans, that we take our hate, like our violent side and we channel it towards cooperation instead of competition, and that helps us build more.
And then, as you read the chain of thought, you're like, this is kind of just like you read Sapiens and maybe some other books, and that's your chain of thought.
Pretty much, yeah.
A lot of it is regurgitation.
I'm now going to move up a part of the conversation I had planned for later, which is the wall.
Effectively, is training standard large language models coming close to hitting a wall?
Whereas before, there were somewhat predictable returns: if you put a certain amount
of data and a certain amount of compute towards training these models,
you could make them predictably better.
As we're talking, it seems to me like you believe that that is eventually not going to be true.
Well, I don't know if I would call it a wall, but it's certainly diminishing return in the sense
that, you know, we've kind of run out of natural text data to train those LLMs, where they're
already trained with, you know, on the order of, you know, 10 to the 13 or 10 to the 14 tokens.
That's a lot.
That's a lot.
It's like the whole internet.
That's all the publicly available internet.
And then, you know, some companies license content that is not publicly available.
And then there is talk about, like, you know, generating artificial data and then hiring
thousands of people to kind of, you know, generate more data.
By their knowledge, PhDs and professors.
Yeah, but in fact, it could be even simpler than this because most of the systems actually
don't understand basic logic, for example, right?
So to some extent, you know, there's going to be slow progress along those lines with synthetic data, with, you know, hiring more people to, you know,
plug the holes in the sort of, you know, knowledge background of those systems.
But it's diminishing return, right?
The costs of generating that data are ballooning, and the returns are not that great.
So we need a new paradigm.
Okay, we need a new kind of architecture of systems that, at the core, are capable of search, you know, searching for a good solution, checking whether that solution is good, planning for a sequence of actions to arrive at a particular goal, which is what you would need for an agentic system to really work.
Everybody is talking about agentic systems. Nobody has any idea how to build them other than basically regurgitating plans that the system has already been trained on. Okay. So, you know, it's like everything in computer science. You can engineer a solution, which is limited, in the context of AI. You can make a system that
is, you know, based on learning or retrieval with enormous amounts of data. But really, the complex
thing is how you build a system that can solve new problems without being trained to solve those
problems. We are capable of doing this. Animals are capable of doing this. Facing a new situation,
we can either solve it zero-shot, without training ourselves to handle that situation, just the
first time we encounter it. Or we can learn to solve it extremely quickly. So, for example,
you know, we can learn to drive in, you know, a couple dozen hours of practice.
And to the point that after 20, 30 hours, it becomes kind of second nature, where this becomes kind of subconscious.
We don't even think about it.
You don't even think about it.
Speaking of system one, system two, right?
That's right.
So, you know, this recalls the discussion we had with Danny Kahneman a few years ago.
So, you know, the first time you drive, your system two is all present.
You have to use it.
You imagine all kinds of catastrophe scenarios and stuff like that, right?
Your full attention is devoted to driving.
But then after a number of hours, you know, you can talk to someone at the same time.
Like, you don't need to think about it.
It's become sort of subconscious and more or less automatic.
It's become system one.
And pretty much every task that we, you know, learn that we accomplish the first time,
we have to use the full power of our minds.
And then eventually if we repeat them sufficiently many times,
they get kind of subconscious.
I have this vivid memory of once being in a workshop where one of the participants was a chess grandmaster
and he played a simultaneous game against like 50 of us, right, you know, going from one person to another.
You know, I got wiped out in 10 turns or something.
I'm terrible at chess, right?
But so he would come, you know, come to my table.
You know, I had time to think about this because, you know, he was playing the other 50 tables or something.
So I make my move in front of him. He goes, like, what? And then immediately plays.
So he doesn't have to think about it.
I was not a challenging enough opponent that he had to actually call his system two.
His system one was sufficient to beat me.
And what that tells you is that when you become familiar with the task and you train yourself, you know, it kind of becomes subconscious.
But the essential ability of humans and many animals is that when you face a new situation,
you can think about it, figure out a sequence of actions, a course of action, to accomplish a goal.
And you don't need to know much about the situation other than your common knowledge of how the world works, basically.
That's what we're missing, okay, with AI systems.
Okay, now I really have to blow up the order here because you've said some very interesting things that we have to talk about.
You talked about how basically LLMs have hit the point of diminishing returns, large
language models, the things that have gotten us here, and we need a new paradigm.
But it also seems to me that that new paradigm isn't here yet.
And I know you're working on the research for it, and we're going to talk about that,
what the next new paradigm might be.
But there's a real timeline issue, don't you think?
Because I'm just thinking about the money that's been raised and put into this.
Last year, $6.6 billion to OpenAI.
Last week or a couple weeks ago, another three and a half billion to Anthropic after they raised
four billion last year. Elon Musk is putting another, you know, another small fortune into
building Grok. These are all LLM-first companies. They're not searching out the next paradigm. I mean,
maybe OpenAI is, but that $6.6 billion that they got was because of ChatGPT.
So where's this field going to go? Because if that money is being invested into something,
that is at the point of diminishing returns requiring a new paradigm to progress,
that sounds like a real problem.
Well, I mean, we have some ideas about what this paradigm is.
The difficulty, I mean, what we're working on, is trying to make it work.
And it's, you know, not simple; these things take years.
And so the question is: are all the capabilities we're talking about,
perhaps through this new paradigm that we're thinking of, that we're working on,
going to come quickly enough to justify all of this investment?
And if it doesn't come quickly enough, is the investment still justified?
Okay, so the first thing you can say is we are not going to get to human-level AI by just scaling up LLMs.
This is just not going to happen, okay?
That's your perspective.
There's no way, okay, absolutely no way.
and whatever you can hear from some of my more adventurous colleagues
is not going to happen within the next two years.
There's absolutely no way in hell, you know, pardon my French.
You know, the idea that we're going to have, you know,
a country of geniuses in a data center.
That's complete BS, right?
There's absolutely no way.
What we're going to have maybe is systems that are trained on sufficiently large amounts of data
that any question that any reasonable person may ask,
will find an answer through the systems.
And it would feel like you have, you know, a PhD sitting next to you.
But it's not a PhD you have next to you.
It's, you know, a system with a gigantic memory and retrieval ability,
not a system that can invent solutions to new problems,
which is really what a PhD is.
Okay, this is actually connected.
It's, you know, connected to this post that Thomas Wolf made, that inventing new things requires the type of skills and abilities that you're not going to get from LLMs.
So there's this big question. The investment that is being done now
is not done for tomorrow. It's done for the next few years.
And most of the investment, at least from the meta side, is investment in infrastructure for inference.
Okay, so let's imagine that by the end of the year, which is really the plan at Meta, we have 1 billion users of Meta AI, through smart glasses, you know, a standalone app, and whatever.
You're going to serve those people, and that's a lot of computation.
So that's why you need, you know, a lot of investment in infrastructure to be able to scale this up and, you know, build it up.
over months or years.
And so that, you know, that's where most of the money is going,
at least on, you know, the side of companies like Meta and Microsoft and Google and essentially Amazon.
Then there is, so this is just operations, essentially.
Now, is there going to be a market for, you know, 1 billion people
using those things regularly, even if there is no change of paradigm? And the answer is probably
yes. So, you know, even if the revolution of a new paradigm doesn't come, you know, within three
years, this infrastructure is going to be used. There's very little question about that. Okay,
so it's a good investment. And it takes so long to set up, you know, data centers and all that
stuff that you need to get started now and plan for, you know, progress to be continuous so that,
you know, eventually the investment is justified.
But you can't afford not to do it, right?
Because there would be too much of a risk to take if you have the cash.
But let's go back to what you said.
The stuff today is still deeply flawed.
And there have been questions about whether it's going to be used.
Now, Meta is making this consumer bet, right?
The consumers want to use the AI.
That makes sense.
OpenAI has 400 million users of ChatGPT.
Meta has 3, 4 billion.
I mean, basically, if you have a phone.
We have three point something billion users, 600 million users of Meta AI.
Right. Okay. So more than ChatGPT.
Yeah, but it's not used as much as ChatGPT.
So the users are not as intense, if you want.
But basically the idea that Meta can get to a billion consumer users, that seems reasonable.
But the thing is, a lot of this investment has been made with the idea that this will be useful to enterprises, not just a consumer app.
And there's a problem because, like we've been talking about, it's not good enough yet.
You look at deep research.
This is something Benedict Evans has brought up.
Deep research is pretty good, but it might only get you 95% of the way there, and maybe 5% of it hallucinates.
So if you have a 100-page research report and 5% of it is wrong and you don't know which 5%, that's a problem.
And similarly, in enterprises today, every enterprise is trying to figure out how to make
AI useful to them, generative AI useful to them and other types of AI, but only 10% or 20% maybe
of proof of concepts make it out the door into production because it's either too expensive
or it's fallible. So if we are getting to the top here, what do you anticipate is going to
happen with everything that has been pushed in the anticipation that it is going to get even better
from here?
Well, so again, it's a question of the timeline, right?
When are those systems going to become sufficiently reliable and intelligent
so that the deployment is made easier?
But, you know, I mean, the situation you're describing, that beyond the impressive demos, actually deploying systems that are reliable is where things tend to falter in the use of computers and technologies, and particularly AI, this is not new. It's basically, you know, why we had super impressive, you know, autonomous driving demos 10 years ago, but we still don't have level 5 self-driving cars, right? It's the last mile that's really difficult, so to speak, for cars. You know, it's, you know, the last...
That was a pun, and it was not deliberate.
You know, the last few percent of reliability,
which makes a system practical
and how you integrate it with sort of existing systems
and blah, blah, blah,
and how it makes users of it more efficient
if you want or more reliable or whatever.
That's where it's difficult.
And, you know, this is why,
If we go back several years and we look at what happened with IBM Watson.
So Watson was going to be the thing that, you know, IBM was going to push and generate tons of revenue by having Watson, you know, learn about medicine and then be deployed in every hospital.
And it was basically a complete failure and was sold for parts, right?
and it cost IBM a lot of money, and the CEO as well.
And what happens is that actually deploying those systems
in situations where they are reliable and actually help people
and don't hurt the natural conservatism of the labor force,
this is where things become complicated.
We're seeing the same, you know, the process we're seeing now
with the difficulty of deploying the system is not new.
It's happened absolutely at all times.
This is also why, you know, some of your listeners perhaps are too young to remember this,
but there was a big wave of interest in AI in the 1980s, early 1980s, around expert systems.
And, you know, the hottest job in the 1980s was going to be knowledge engineer,
and your job was going to be to sit next to an expert and then, you know,
turn the knowledge of the expert into rules and facts that
would then be fed to an inference engine that would be able to kind of derive new facts
and answer questions and blah, blah, blah.
Big wave of interest, the Japanese government started a big program called Fifth Generation
Computer, the hardware was going to be designed to actually take care of that and blah, blah,
you know, mostly a failure.
The wave of interest in this kind of died in the mid-90s.
And, you know, a few companies were successful, but basically for a narrow set of applications for which you could actually reduce human knowledge to a bunch of rules, and for which it was economically feasible to do so.
But the wide-ranging impact on all of society and industry was just not there.
And so that's the danger of AI all the time.
I mean, the signals are clear that, you know, still, LLMs with all the bells and whistles
actually play an important role, if nothing else, for information retrieval.
You know, most companies want to have some sort of internal experts that know all the internal
documents so that any employee can ask any question.
We have one at Meta. It's called Metamate.
It's really cool. It's very useful.
Yeah, and I'm not suggesting that modern AI, or modern generative AI, is not useful.
I'm asking purely that there's been a lot of money that's been invested into expecting
this stuff to effectively achieve God-level capabilities.
And we both are talking about how there's potentially diminishing returns here.
And then what happens if there's that timeline mismatch, like you mentioned?
And this is the last question I'll ask about it because I feel like we have so much else to cover.
but I feel like timeline mismatches, that might be personal to you.
You and I first spoke nine years ago, which is crazy now, nine years ago,
and about how in the early days you had an idea for how AI should be structured,
and you couldn't even get a seat at the conferences.
And then eventually when the right amount of compute came around,
those ideas started working,
and then the entire AI field took off based on the ideas that you worked on with Bengio and Hinton.
But...
And a bunch of others.
And many others.
And for the sake of efficiency, we'll say go look it up.
But just talking about those mismatched timelines, when there have been overhyped moments in the AI field, maybe with expert systems that you were just talking about, and they don't pan out the way that people expect, the AI field goes into what's called an AI winter.
Well, there's a backlash.
Yeah.
Correct.
And so if we're going to...
If we are potentially approaching this moment of mismatched timelines,
do you fear that there could be another winter now,
given the amount of investment,
given the fact that there's going to be potentially diminishing returns
with the main way of training these things,
and maybe we'll add in the fact that the stock market looks like
it's going through a bit of a downturn right now.
Now, that's a variable, probably the third most important variable
of what we're talking about, but it has to factor.
So, yeah, I think, I mean, there's certainly a question of the timing there, but I think if we try to dig a little bit deeper, as I said before, if you think that we're going to get to human-level AI by just training on more data and scaling up LLMs, you're making a mistake.
So if you're an investor and you invest in a company that told you, we're going to get to human-level AI and PhD-level AI by just, you know, training on more data with a few tricks, I don't know if you're going to lose your shirt, but that was probably not a good idea.
However, there are ideas about how to go forward
and have systems that are capable of doing
what every intelligent animal and human are capable of doing
and that current AI systems are not capable of doing.
And I'm talking about understanding the physical world,
having persistent memory, and being able to reason and plan.
Those are the four characteristics that, you know, need to be there.
And that requires systems that, you know, can acquire common sense.
They can learn from natural sensors like video as opposed to just text, just human-produced data.
And that's a big challenge.
I mean, I've been talking about this for many years now and saying this is what the challenge is.
This is what we have to figure out.
And my group and I, or people working with me and others who have listened to me, are making progress along this line, of a system that can be trained
to understand how the world works from video, for example, systems that can use mental
models of how the physical world works to plan sequences of actions to arrive at a particular
goal. So we have kind of early results of these kind of systems, and there are people
at deep mind working on similar things, and there are people in various universities working on
this. So the question is, you know, when is it going to go from interesting research papers demonstrating a new capability with a new
architecture to architectures at scale that are practical for a lot of applications and can find
solutions to new problems without being trained to do it, etc. And it's not going to happen
within the next three years, but it may happen between three to five years, something like that.
And that kind of corresponds to, you know, the sort of ramp-up that we see in investment.
Now, whether other – so that's the first thing.
Now, the second thing that's important is that there's not going to be one secret magic bullet
that one company or one group of people is going to invent that is going to just solve the problem.
It's going to be a lot of different ideas, a lot of effort, some principles around,
on which to base this that some people may not subscribe to and will go in a direction that
is, you know, will turn out to be a dead end. So there's not going to be like a day
before which there is no AGI and after which we have AGI. This is not going to be an event.
It's going to be continuous: conceptual ideas that, as time goes by, are going to be made bigger
and scaled up and are going to work better.
And it's not going to come from a single entity;
it's going to come from the entire research community
across the world. And the people who
share their research are going to move faster than
the ones that don't. And so
if you think that there is some
startup somewhere with five people
who has discovered the secret of AGI and you should invest five billion in them,
you're making a huge mistake.
You know, Yann, first of all, I always enjoy our conversations because we start to get some real answers. And I remember, even from our last conversation, I was always looking back to that conversation saying, okay, this is what Yann says, this is what everybody else is saying. I'm pretty sure that this is the grounding point. And that's been correct.
And I know we're going to do that with this one as well.
And now you've set me up for two interesting threads
that we're going to pull out as we go on with our conversation.
First is the understanding of physics and the real world,
and the second is open source.
So we'll do that when we come back right after this.
Hey, everyone.
Let me tell you about the Hustle Daily Show,
a podcast filled with business,
tech news and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show and your favorite podcast app,
like the one you're using right now.
And we're back here with Yann LeCun.
He is the chief AI scientist at Meta and the Turing Award winner
that we're thrilled to have on our show,
luckily for the third time.
I want to talk to you about physics, Yann,
because there's sort of this famous moment
in big technology podcast history,
and I say famous with our listeners.
I don't know if it really extended beyond,
but you had me write to ChatGPT:
if I hold a paper horizontally with both hands
and let go of the paper with my left hand,
what will happen.
And I write it, and it convincingly says,
like it writes, the physics will happen and the paper will float towards your left hand.
And I read it out loud, convinced, and you're like, that thing just hallucinated and you believed it.
That is what happened.
So, listen, it's been two years.
I put the test to ChatGPT today.
It says, when you let go of the paper with your left hand, gravity will cause the left side of the paper to drop while the right side still held up by your right hand remains in place.
This creates a pivot effect where the paper rotates around the point where your right hand is holding it.
So now it gets it right.
It learned the lesson.
You know, it's quite possible that this, you know,
someone hired by OpenAI to solve the problem was fed that question and sort of fed the answer, and the system was fine-tuned with the answer.
I mean, you know, obviously you can imagine an infinite number of such questions.
And this is where, you know, the so-called post-training of LLMs becomes expensive,
which is that, you know, how much coverage of all those styles of questions do you have to do
for the system to basically cover 90% of, or 95% or whatever percentage of all the questions that people may ask it?
But, you know, there is a long tail, and there is no way you can train the system to answer all possible questions
because there is an essentially infinite number of them.
And there are way more questions the system cannot answer than questions it can answer.
You cannot cover the set of all possible questions in the training set.
Right.
So because I think our conversation last time was saying you said that because these actions of like what's happening with the paper,
if you let go of it with your hand, has not been covered widely in text, the model won't really know how to handle it.
Because unless it's been covered in text, the model won't have that understanding, won't have that inherent understanding of the real world.
And I've kind of gone with that for a while.
then I said, you know what, let's try to generate some AI videos.
And one of the interesting things that I've seen with the AI videos is there is some
understanding of how the physical world works there in a way that in our first meeting nine
years ago, you said one of the hardest things to do is you ask an AI, what happens if you
hold a pen vertically on a table and let go?
Will it fall?
And there's like an unbelievable amount of permutations that can occur.
And it's very, very difficult for that AI to figure that out because it just doesn't inherently understand physics.
But now you go to something like Sora and you say, show me a video of a man sitting on a chair
kicking his legs. And you can get that video. And the person sits on the chair and they kick their
legs and the legs, you know, don't fall out of their sockets or stuff. They bend at the joints.
And they don't have three legs. And they don't have three legs. So wouldn't that suggest an improvement
of the capabilities here with these large models?
No.
Why?
Because you still have those videos produced by those video generation systems
where you spill a glass of wine
and the wine like floats in the air or like flies off or disappears or whatever.
So, of course, for every specific situation,
you can always collect more data for that situation
and then train your model to handle it.
But that's not really understanding the underlying reality.
This is just, you know, compensating the lack of understanding by increasingly large amounts of data.
You know, children understand, you know, simple concepts like gravity
with a surprisingly small amount of data.
So, in fact, there is an interesting calculation you can do,
which I've talked about publicly before.
But if you take an LLM, a typical LLM trained on 30 trillion tokens, something like that, right?
That's 3 times 10 to the 13 tokens.
A token is about 3 bytes.
So that's 0.9 times 10 to the 14 bytes.
Let's say 10 to the 14 bytes to round this up.
That text would take any of us probably on the order of 400,000 years to read.
No problem.
At 12 hours a day.
Now, a 4-year-old has been awake a total of 16,000 hours.
You can multiply by 3,600 to give a number of seconds.
And then you can put a number on how much data has gotten to your visual cortex
through the optic nerve.
Optic nerve, each optic nerve, we have two of them, carries about one megabyte per second, roughly.
So it's 2 megabytes per second times 3,600 times 16,000.
And that's just about 10 to the 14 bytes.
Okay, so in four years, a child has seen, through vision, or touch, for that matter, as much data as the biggest LLMs.
And it tells you clearly that we're not going to get to human-level AI by just training on text.
It's just not a rich enough source of information.
And by the way, 16,000 hours is not that much video.
It's 30 minutes of YouTube uploads.
Okay.
We can get that pretty easily.
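(As an aside, the arithmetic quoted above can be checked with a few lines of Python; the figures are the rounded ones used in the conversation.)

```python
# Back-of-the-envelope check of the numbers quoted above, rounded as in the conversation.
tokens = 30e12                 # ~30 trillion tokens of training text
bytes_per_token = 3
text_bytes = tokens * bytes_per_token            # ~0.9 x 10^14 bytes

hours_awake = 16_000           # rough total waking hours of a four-year-old
seconds_awake = hours_awake * 3_600
optic_nerve_rate = 2e6         # ~1 MB/s per optic nerve, two nerves, in bytes per second
visual_bytes = seconds_awake * optic_nerve_rate  # ~1.15 x 10^14 bytes

print(f"text:   {text_bytes:.2e} bytes")
print(f"vision: {visual_bytes:.2e} bytes")       # same order of magnitude, about 10^14
```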
Now, in nine months, a baby has seen, you know, let's say 10 to the 13 bytes or something, which is not much again.
And in that time, the baby has learned basically all of the intuitive physics that we know about.
You know, gravity, conservation of momentum, the fact that objects don't spontaneously disappear,
the fact that they still exist even if you hide them.
I mean, there's all kinds of stuff, you know,
very basic stuff that we learn about the world
in the first few months of life.
And this is what we need to reproduce with machine.
This type of learning of, you know, figuring out
what is possible and impossible in the world,
what will result from an action you take,
so that you can plan a sequence of actions to arrive at a particular goal.
That's the idea of a world model.
And now connected with the question about video generation systems,
is the right way to approach this problem to train better and better video generation systems.
And my answer to this is absolutely no.
The problem of understanding the world does not go through the solution to generating video at the pixel level.
I don't need to know, if I take this glass of, this cup of water and I spill it, I cannot entirely predict the exact path that the water will follow on the table, and what shape it's going to take and all that stuff, and the noise it's going to make. But at a certain level of abstraction, I can make a prediction that the water will spill and it will probably make my phone wet and everything.
So I can't predict all the details, but I can predict at some level of abstraction.
And I think that's really a critical concept, the fact that if you want a system to be able
to learn to comprehend the world and understand how the world works, it needs to be able to
learn an abstract representation of the world that allows it to make those predictions.
And what that means is that those architectures will not be generative.
Right.
And I want to get to your solution here in a moment,
but I just wanted to also, like,
what would a conversation between us be without a demo?
So I want to just show you,
I'm going to put this on the screen when we do the video,
but this is a video I was pretty proud of.
I got this guy sitting on a chair kicking his legs out
and the legs stay attached to his body.
And I was like, all right, this stuff is making real progress.
And then I said, can I get a car going into a haystack?
And so it's two bales of haystacks, and then a haystack magically emerges from the hood of a car that's stationary.
And I just said to myself, okay, Yann wins again.
It's a nice car, though.
Yeah.
I mean, the thing is, those systems have been fine-tuned with a huge amount of data for humans because, you know, that's what people are asking.
Most videos that they're asked to produce.
So there is a lot of data of humans doing various things to train those systems.
So that's why it works for humans, but not for a situation that the people
training that system had not anticipated. So you said that the model can't be generative to be able
to understand the real world. That's right. You are working on something called V-JEPA.
JEPA, yes. Right, the V is for video. You also have I-JEPA for images.
Right. We have JEPAs for all kinds of stuff. Text also.
And text. So explain how that will solve the problem of being able to allow a machine
to abstractly represent what is going on in the real world.
Okay.
So what has made the success of AI, and particularly natural language understanding and chatbots,
in the last few years, but also to some extent, computer vision is self-supervised learning.
So what is self-supervised learning?
It's: take an input, be it an image, a video, a piece of text, whatever, corrupt it in some way,
and train a big neural net to reconstruct it.
Basically, recover the uncorrupted version of it
or the undistorted version of it
or a transformed version of it
that would result from taking an action.
Okay.
And, you know, that would mean, for example,
in the context of text, take a piece of text,
remove some of the words,
and then train some big neural net
to predict the particular words that are missing.
Take an image, remove some pieces of it,
and then train big neural net
to recover the full image.
Take a video, remove a piece of it, train a neural net
to predict what's missing.
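(For readers who want to see that recipe concretely, here is a minimal, library-agnostic Python sketch of the corrupt-and-reconstruct setup just described; it only builds the kind of corrupted-input and target pair such a system would be trained on, with made-up helper names.)

```python
# A minimal sketch of the corrupt-and-reconstruct recipe: hide part of the input and
# ask a model to recover what was hidden. No real training happens here; it only
# constructs the (corrupted, target) pair a self-supervised system would learn from.
import random

def corrupt(tokens: list[str], mask_rate: float = 0.3) -> tuple[list[str], list[int]]:
    """Replace a random subset of tokens with a [MASK] placeholder."""
    masked, positions = list(tokens), []
    for i in range(len(tokens)):
        if random.random() < mask_rate:
            masked[i] = "[MASK]"
            positions.append(i)
    return masked, positions

sentence = "the cat sat on the mat".split()
corrupted, targets = corrupt(sentence)
print(corrupted, [sentence[i] for i in targets])  # input to the net, and what it must recover
```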
So LLMs are a special case of this where you take a text
and you train the system to just reproduce the text.
And you don't need to corrupt the text because the system is designed
in such a way that to predict one particular word or token in the text,
it can only look at the tokens that are to the left of it.
Okay, so in effect, the system
has had wired into its architecture the fact that it cannot look at the present or the future
to predict the present. It can only look at the past. Okay. So, but basically you train
that system to just reproduce its input on its output. Okay? So this kind of architecture
is called a causal architecture. And this is what an LLM is a large language model. That's
what, you know, all the chatbots in the world are based on. Take a piece of text and train
that system to just reproduce that piece of text on its output. And to predict a
particular word, it can only look at the words to the left of it.
And so now what you have is a system that given a piece of text can predict the word
that follows that text.
And you can take that word that is predicted, shift it into the input, and then predict the
second word.
Shift that into the input, predict the third word.
That's called auto-regressive prediction.
It's not a new concept, very old.
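(Here is a toy Python sketch of the auto-regressive loop just described: predict the next token from the left context only, then shift the prediction back into the input. The next_token_distribution function is a stand-in for a trained causal language model, not a real API.)

```python
# Toy sketch of auto-regressive prediction: to predict token t, the model may only look
# at tokens 0..t-1, and each predicted token is shifted back into the input.
import random

def next_token_distribution(context: list[str]) -> dict[str, float]:
    """Stand-in: a probability distribution over the next token, given only the past."""
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return {w: 1.0 / len(vocab) for w in vocab}   # uniform, purely for illustration

def generate(prompt: list[str], steps: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        dist = next_token_distribution(tokens)                   # conditioned on the left context only
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(nxt)                                       # shift the prediction into the input
    return tokens

print(generate(["the", "cat"]))
```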
So, you know, self-supervised learning does not train a system to accomplish a particular task other than capturing the
internal structure of the data. It doesn't require any labeling by a human. Okay? So apply
this to images. Take an image, mask a chunk of it, like a bunch of patches from it, if you
want, and then train a big neural net on that to reconstruct what is missing. And now use the internal
representation of the image learned by the system as input to a subsequent downstream task
for, I don't know, image recognition, segmentation, whatever it is.
It works to some extent, but not great.
So there was a big project like this to do this at FAIR.
It's called MAE, masked autoencoder.
It's a special case of denoising autoencoder, which itself is, you know, the sort of general
framework from which I derive this idea of self-supervised learning.
So it doesn't work so well.
And there's various ways to, you know, if you apply this to video also, I've been working
on this for almost 20 years now.
Take a video, show just a piece of the video, and then train the system to predict what's
going to happen next in the video.
So the same idea as for text, but just for video.
And that doesn't work very well either.
And the reason it doesn't work, why does it work for text and not for video, for example?
And the answer is it's easy to predict a word that comes after a text.
You cannot exactly predict which word follows a particular text, but you can produce something
like a probability distribution over all the possible words in a dictionary, all the possible
tokens.
There's only about 100,000 possible tokens.
So you just produce a big vector with 100,000 different numbers that are positive and sum up to one.
Okay. Now, what are you going to do to represent a probability distribution over all possible
frames in a video or all possible missing parts of an image? We don't know how to do this properly.
In fact, it's mathematically intractable to represent distributions in high dimensional continuous
spaces. Okay? We don't know how to do this in a kind of useful way, if you want.
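(A small Python sketch of why the text case is tractable: with a finite vocabulary, the model can output one normalized probability per token, and there is no equally simple normalized object for continuous, high-dimensional video frames. The numbers here are placeholders.)

```python
# With a finite vocabulary, a model can emit one normalized probability per token.
import math

def softmax(scores: list[float]) -> list[float]:
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

vocab_size = 100_000               # roughly the vocabulary size quoted above
logits = [0.0] * vocab_size        # stand-in scores from a model
probs = softmax(logits)
print(len(probs), round(sum(probs), 6))   # 100000 positive numbers summing to 1
```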
And so, and I've tried to, you know, do this for video for a long time.
And so that is the reason why those idea of self-supervised learning using generative models
have failed so far.
And this is why, you know, using, you know, trying to train a video generation system as
a way to understand, to get a system to understand how the world works.
That's why it can't succeed.
So what's the alternative?
The alternative is something that is not a generative architecture, which we call JEPA; that means joint embedding predictive
architecture. And we know this works much better than attempting to reconstruct. So we've had
experimental results on learning good representations of images going back many years, where instead of
taking an image, corrupting it, and attempting to reconstruct this image, we take the original full
image and the corrupted version, we run them both through neural nets.
Those neural nets produce representations of those two images, the initial one and the
corrupted one.
And we train another neural net, a predictor, to predict the representation of the full image
from the representation of the corrupted one.
Okay.
And if you train a system, if you successfully train a system of this type, it is not
trained to reconstruct anything. It's just trying to learn a representation so that you can make
predictions within the representation layer. And you have to make sure that the representation contains
as much information as possible about the input, which is where it's difficult, actually.
That's the difficult part of training those systems. So that's called a JEPA, joint embedding
predictive architecture. And to train a system to learn good representations of images, those
joint embedding architectures work much better than the ones that are generative that are trained.
by reconstruction.
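(For the technically inclined, here is a minimal PyTorch-style sketch of the joint embedding predictive idea described above. It is an illustrative toy under assumed dimensions and a simple stop-gradient trick, not Meta's actual I-JEPA or V-JEPA code.)

```python
# Minimal sketch of a joint embedding predictive setup: encode both the full and the
# corrupted input, and train a predictor to map the corrupted representation onto the
# representation of the full input. Dimensions and the corruption are placeholders.
import torch
import torch.nn as nn

dim = 128
encoder = nn.Sequential(nn.Linear(784, dim), nn.ReLU(), nn.Linear(dim, dim))
predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

full = torch.randn(32, 784)                               # a batch of "full" inputs
corrupted = full * (torch.rand_like(full) > 0.5).float()  # crude corruption: zero out half the input

target = encoder(full).detach()          # representation of the full input (stop-gradient is one
                                         # common way to discourage representation collapse)
pred = predictor(encoder(corrupted))     # predict it from the corrupted input's representation
loss = ((pred - target) ** 2).mean()     # the loss lives in representation space, not pixel space
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```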
And now we have a version that works for video too.
So we take a video, we corrupt it by masking a big chunk of it.
We run the full video and the corrupted one through encoders that are identical.
And simultaneously, we train a predictor to predict the representation of the full video
from the partial one.
And the representation that the system learns of videos, when you feed it to a system that
you train to tell you, for example, what action is taking place in the video or whether the video is possible or impossible or things like that, it actually works quite well.
That's cool. So it gives that abstract thinking in a way.
Right. And we have experimental results that show that this joint embedding training works; we have several methods for doing this.
There's one that's called DINO, another one that's called VICReg, another one that's called I-JEPA, which is sort of a distillation method.
And so, you know, several different ways to approach this.
But one of those is going to lead to a recipe that basically gives us a general way of training those JEPA architectures.
Okay, so it's not generative, because the system is not trained to regenerate the part of the input.
It's trying to generate a representation, an abstract representation of the input.
And what that allows it to do is to ignore all the details about the input that are really not predictable.
Like, you know, the pen that you put on the table vertically and when you let it go,
you cannot predict in which direction it's going to fall.
But at some abstract level, you can say that the pen is going to fall.
It's falling.
Right?
Without representing the direction.
So that's the idea of JEPA.
And we're starting to have, you know, good results on sort of having systems like this.
So the V-JEPA system, for example, is trained on natural videos, lots of natural videos.
And then you can show it a video that's impossible.
Like a video where, for example, an object disappears, or changes shape, okay?
You can generate this with a game engine or something.
Or a situation where you have a ball rolling and it rolls and it stops behind a screen.
And then the screen comes down and the ball is not there anymore.
Right.
Okay.
So things like this.
And you measure the prediction error of the system.
So the system is trained to predict, right?
And not necessarily in time, but, like, basically to predict, you know, the sort of coherence of the video.
And so you measure the prediction error as you show the video to the system.
And when something impossible occurs, the prediction error goes through the roof.
And so you can detect if the system has integrated some idea of what, you know,
is possible physically or what's not possible, just by being trained with physically possible natural videos.
So that's really interesting.
That's sort of the first hint that a system has acquired some level of common sense.
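(Here is a toy Python sketch of that evaluation: run a predictor over a clip and flag the moments where its prediction error spikes. The toy_model function is a made-up stand-in; a real setup would compare learned video representations rather than raw numbers.)

```python
# Flag moments where a predictor's error spikes, read as the model being "surprised".
def toy_model(past_frames: list[float], next_frame: float) -> float:
    """Pretend prediction error: large when the signal jumps discontinuously."""
    return abs(next_frame - past_frames[-1])

def flag_surprises(model, frames: list[float], threshold: float = 5.0) -> list[int]:
    surprising = []
    for t in range(1, len(frames)):
        error = model(frames[:t], frames[t])
        if error > threshold:            # the prediction error "goes through the roof"
            surprising.append(t)
    return surprising

frames = [1.0, 2.0, 3.0, 4.0, 50.0, 51.0, 52.0]   # the jump at index 4 stands in for an impossible event
print(flag_surprises(toy_model, frames))          # -> [4]
```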
And we have versions of those systems also that are so-called action-conditioned.
So basically we have things where we have a chunk of video or an image of, you know,
the state of the world at time T.
And then an action is being taken, like, you know, a robot arm is being moved or whatever.
And then, of course, we can observe the result of this action.
So now what we have, when we train a JEPA with this, the model basically,
can say, here is a state of the world at time T.
Here is an action you might take.
I can predict the state of the world at time T plus one
in this abstract representation space.
There's this learning of how the world works.
Of how the world works.
And the cool thing about this is that now you can imagine,
you can have the system imagine what would be the outcome
of a sequence of actions.
And if you give it a goal saying, like,
I want the world to look like this at the end,
can you figure out a sequence of actions
to get me to that point?
It can actually figure out, by search, a sequence of actions
that will actually produce that result.
That's planning. That's reasoning.
That's actual reasoning and actual planning.
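(As a concrete illustration, here is a toy Python sketch of planning with a learned world model: roll the model forward in imagination for candidate action sequences and keep the one whose predicted outcome lands closest to the goal. Everything in it is a simplified stand-in.)

```python
# Toy planning loop: imagine the next state for each candidate action sequence and
# search for the sequence whose final imagined state is closest to the goal.
from itertools import product

def world_model(state: float, action: float) -> float:
    """Stand-in for the learned predictor: state at time T+1 given state and action."""
    return state + action

def plan(start: float, goal: float, actions=(-1.0, 0.0, 1.0), horizon: int = 4):
    best_seq, best_cost = None, float("inf")
    for seq in product(actions, repeat=horizon):   # exhaustive search for clarity; a real system
        state = start                              # would use gradient-based or heuristic search
        for a in seq:
            state = world_model(state, a)          # roll the model forward in imagination
        cost = abs(state - goal)                   # how far the imagined outcome is from the goal
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

print(plan(start=0.0, goal=3.0))   # prints one action sequence whose imagined outcome reaches the goal
```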
Okay, Yann, I have to get you out of here, we are over time,
but can you give me, in like 60 seconds, your reaction to DeepSeek,
and sort of, has open source overtaken the proprietary models at this point?
And we've got to limit it to 60 seconds.
Otherwise, I'm going to get killed by your team here.
Overtaken is a strong word.
I think progress is faster in the open source world, that's for sure.
But of course, you know, the proprietary shops are profiting from the progress of the open source world, right?
They get access to that information like everybody else.
So what's clear is that there are many more interesting ideas coming out of the open source world
than any single shop, as big as it can be, can come up with.
You know, nobody has a monopoly on good ideas.
And so the magic efficiency of the open source world is that it recruits talent from all over the world.
And so what we've seen with DeepSeek is that
if you set up a small team
with a relatively long leash
and few constraints
on coming up with just the next generation
of LLMs,
they can actually come up with new ideas that
nobody else had come up with, right?
They can sort of reinvent a little bit
how you do things.
And then if they share that
with the rest of the world, then
the entire world progresses.
Okay. And so
it clearly shows that, you know, open source progress is faster and, you know, a lot more innovation
can take place in the open source world, which the proprietary world may have a hard time catching
up with. And it's cheaper to run.
What we see is for, you know, partners who we talk to, they say, well, our clients, when they
prototype something, they may use a proprietary API. But when it
comes time to actually deploy the product, they actually use Llama or other open source
engines because it's cheaper and it's more secure, you know, it's more controllable, you can run
it on premise, you know, there's all kinds of advantages. So we've seen also a big evolution
in the thinking of some people who were initially worried that open source efforts were going
to, I don't know, for example,
help the Chinese or something,
if you have some geopolitical reason
to think it's a bad idea.
But what DeepSeek has shown is that the Chinese don't need us.
I mean, they can come up with really good ideas, right?
I mean, we all know that there are really, really good scientists in China.
And one thing that is not widely known
is that the single most cited paper in all of science
is a paper on deep learning from 10 years ago from 2015,
and it came out of Beijing.
Oh.
Okay.
The paper is called ResNet.
So it's a particular type of architecture of neural net where basically by default, every stage
in a deep learning system computes the identity function.
It just copies its input to its output.
What the neural net does is compute the deviation from this identity.
So that allows you to train extremely deep neural nets with dozens of layers, perhaps 100 layers.
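(For reference, here is a short PyTorch-style sketch of the residual idea: each block computes the identity plus a learned deviation. A real ResNet uses convolutions and normalization; this linear version just illustrates the principle.)

```python
# Each block returns its input plus a learned deviation from the identity function.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.f(x)   # identity by default, plus a learned deviation

deep_net = nn.Sequential(*[ResidualBlock(64) for _ in range(100)])   # ~100 layers, still trainable
print(deep_net(torch.randn(1, 64)).shape)   # torch.Size([1, 64])
```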
And the first author of that paper, a gentleman called Kaiming He, at the time was
working at Microsoft Research, Beijing.
Soon after the publication of that paper, he joined FAIR in California; I hired him.
And he worked at FAIR for eight years or so.
And he recently left and is now a professor at MIT.
Okay.
So there are really, really good scientists everywhere in the world.
Nobody has a monopoly on good ideas.
Certainly, Silicon Valley does not have a monopoly on good ideas.
Or another example of that is actually that the first Llama came out of Paris.
It came out of the FAIR lab in Paris, a small team of 12 people.
So you have to take advantage of the diversity of ideas, backgrounds,
and creative juices of the entire world if you want science and technology to progress fast.
And that's enabled by open source.
Yann, it is always great to speak with you.
I appreciate this.
This is our, I think, fourth or fifth time speaking, again, going back nine years ago.
You always help me see through all the hype and the buzz and actually figure out what's happening.
I'm sure that's going to be the case for our listeners and viewers as well.
So, Yann, thank you so much for coming on.
Hopefully we do it again soon.
Thank you, Alex.
All right, everybody.
Thank you for watching. We'll be back on Friday to break down the week's news. Until then, we'll see you next time on Big Technology Podcast.