StarTalk Radio - Mindreading with Jean Rémi-King
Episode Date: August 22, 2025
What would it take to actually read someone's mind? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly explore the science and ethics of decoding thoughts with Jean-Rémi King, a neuroscience researcher at Meta's Paris lab.
NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/mindreading-with-jean-remi-king/
Thanks to our Patrons Eeshan Londhe, John Strack, Emmanuel Michaca, todd hauser, Justin Belcher, Gabriel Cuadros Caceres, Swaglass, Jon B, John Chase, systemcall, Jim Togyer, Darren Littlefair, Tim Rosener, Duygu Guler, shoulderutube, Kyle Telfer, Carol Cherich, Eduardo Lobato, Aladin, jlayton21, melissa prien, Ben, PuerFugax, LadyGemini, Holly Williams, Dr. Spin, Brent McAlister, Jonathan Hughes, Robert Hartman, James Tulip, Sleepy Blulys, Megan Childs, Esteban Pérez, Rodger Gamblin, Reka Royal, Nicholas Mckenzie, Damon Friedman, Joshua Hemphill, Nadia, Gregory Meyer, Jonathan Bassignani, Kellyn Gerenstein, Jahangiri, Halimah, Tomaz Lovsin, Michael Tombari, Andrei Mistretu, FelicitousFeild, ayadal, nelly, and Josh Christensen for supporting us this week.
Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus.
Transcript
Gary, you're taking us inside the brain again.
I know, it's the inner space, and it's fascinating.
Is it as fascinating as outer space?
You'll argue it's not.
I knew you were both going to say that.
Are you reading our mind?
Absolutely.
Okay, an expert tells us the future of AI and reading your mind.
Coming right up on StarTalk, special edition.
Welcome to StarTalk.
Your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk.
Neil deGrasse Tyson, your personal astrophysicist,
and I see to my right, Gary O'Reilly,
that must mean it's special edition.
Yes.
Gary.
Hey, Neil.
How you doing, man?
I'm good.
Former soccer pro?
Allegedly.
Allegedly?
No.
Are you better here than you were when you were playing soccer?
As I get older, I do get better.
Good answer.
Chuck, as you get older, you get...
Get older. That is what happens to me.
That's about it.
So I'm looking at the title you propose.
Reading your mind.
Oh. Yes. And I thought this was a science
show. I know.
Start the seance now.
Right. Okay. I'm getting an M.
An M.
There's a, you have a relative somewhere.
Somewhere in the hemisphere.
Right. You had a mother.
Okay, sorry.
That's just... that you're not...
All right.
AI will be driving our cars
Our trucks, our trains
Soon enough and probably if not already
It will help us solve our everyday problems
It already is
Exactly
And it'll probably solve some of our big problems
It may even help us tidy up
Some of the mess we've made over the years
But surely it's never going to be able
To read our minds, is it?
Hmm.
Well, actually, yeah, it can.
And our guest today leads a research team using AI
to decode the language of our brains.
But before you start shouting at your devices,
stop and think about the positivity that could come with this as a tool.
Like those who can think but not speak, who will get a voice.
So for that, and if that happens, that would be truly amazing.
And the ethics of that too.
Absolutely, that's what I'm talking about there.
So, shall we introduce our guest?
I'd be delighted to.
Thank you.
Jean-Rémi King.
Ooh.
Jean-Lamee King.
Oh, you're going to be saying that for hours, aren't you?
From Paris, did I say all that right?
Absolutely.
Perfect accents.
Welcome to StarTalk.
Welcome to my office here at the Hayden Planetarium.
Thank you very much for having me.
And you work for Meta.
That's right.
Facebook, basically.
but...
Absolutely, yes.
But Meta, I mean...
I think it's not just one singular.
It's not a thing anymore.
It's Meta.
All right, yeah.
All right, you work for Meta in Paris.
You have a background in neuroscience.
I love neuroscience.
We have neuroscientists on this show all the time.
We really do, yeah.
We're all in the situation when we have a neuroscientist.
And describe to us what your goals are.
Aside from world domination.
So we have a lab at Meta, which is called FAIR,
for fundamental AI research,
which is structured as an academic lab, in a sense.
The goal is really to understand more about the principles
of artificial intelligence.
And within that lab, I'm working with a team
that interfaces two disciplines, neuroscience on the one hand,
and AI, on the other hand,
try to both better understand how the brain works
and also try to perhaps improve AI algorithms
in light of these principles.
How do you have any idea at all
how the brain is processing information?
So we have tools for this, of course, in neuroscience.
Tools?
Interesting.
Tools you put on people's brains?
This is not hammers and chisels.
Tools, that's a euphemism for something, and I want to know what.
Sure, yeah, you have really a wide battery of tools that you can use.
The one that we typically...
On human brains.
On human brains, yeah.
So the ones we tend to use the most in the team are non-invasive neuroimaging techniques.
So from magnetic resonance imaging, like the big scanner you have in hospitals, to electroencephalography.
These are the small nets that you can put on people's heads.
It's little caps that you put on your head with all of the...
And how does that work?
Electrodes.
It looks for fields, electromagnetic fields that come through your skull?
That's right.
So each of those work with different principles.
So for EEG, electroencephalography, and MEG,
magnetoencephalography, you measure the fluctuations of electric and magnetic fields,
which are elicited by neuronal activity.
Thoughts?
Yes, the biological instantiation of thought, yeah.
So is every brain...
Well, yeah, precisely what?
I said, I thought, you know, the biological in...
What's the word?
Instantiation, yeah.
Incensation.
Of thought.
So does that mean that every action in the brain has an electrical counterpart,
like the firing of a synapse? Is it actually electrical, you know?
Actually, you have a lot going on in the brain which is not electric or doesn't lead to
electric fields. In fact, even the neurons which are firing, not all of them are being measured
with EEG or MEG. And we tend to only measure those that are spatially aligned. So in the cortex,
which is the part of the brain which is folded, you have a lot of
neurons, which we call pyramidal cells, that tend to be positioned in the same way.
So when they discharge electricity, the electric field can build up over space because they
actually are aligned spatially.
So it strengthens it, the signal.
Yeah, whereas if they were facing in random directions...
Get some canceling.
Yeah, you'll have noise.
You'll average down to zero, basically.
But because they all aligned with one another, then you can measure these electric fields
at a macroscopic level, even with electrodes that are positioned
on the scalp, so not inside the brain.
That's amazing.
So you're in an fMRI and you're offering images.
Functional magnetic resonance imaging.
Exactly, yes.
So that's like you are actively, you're awake,
talking to the person while they're messing with your brain.
Well, they're not messing with your brain.
They'll offer you an image and that then gets picked up through the data.
But while you're offering an image to a patient, there's other noise.
Well, you declared something he hasn't declared yet.
Can we get him to say it first?
Okay.
When you read the brain, what do you see?
We see a lot of noise.
But maybe just...
Yeah, I didn't say my brain.
I say when you read a regular brain, what do you see?
You see a lot of noise.
But just a clarification on the fMRI.
So, fMRI is a different type of technology that does not pick up electric and magnetic fields
like EEG and MEG.
It actually picks up a proxy of neuronal activity, which is the deoxygenation, or the blood flow,
in the brain. So when neurons are active, they consume oxygen. And so you have a change in the
vascular flow, which you pick up with fMRI. So you're getting the geography of the brain
as to what's happening and where? Absolutely. And this is a very different type of signal that you
would measure with EEG and MEG. And it's very slow. Of course, the blood flow
doesn't change every millisecond, let's say. And so you have a very different type of signal that you
would observe, depending on the device of choice, whether it's fMRI or EEG or MEG, or intracranial
recordings when you can have access to this type of signals.
Intracranial means you actually have probes inside the brain? Inside the brain?
Absolutely. So this is very common. And people say, go ahead and do that.
No. So you do have patients, typically patients who suffer from intractable epilepsy,
who need to have the part of the
brain which generates the seizures removed.
And before doing this, it is common to have a procedure where, well, the neurosurgeons
and the epileptologists decide to put electrodes inside the area which is believed to be
pathological, in order to be sure that this is indeed the brain region that should be removed.
Right.
You don't want to cut out the wrong part of the brain.
Absolutely.
Right.
And so these individuals typically would stay about a week in the hospital during which these
signals can be analyzed by a neurologist, and during that week, you can ask them whether they
would like to participate in, for instance, an experiment that involves, I don't know, story
listening or watching a movie.
I mean, we're already in your brain, so why not?
I mean, we're already in here, you know, it's like when the mechanic goes, listen, I've got
to go in there anyway, so I might as well get the calipers done on the brakes, you know.
Yeah, right.
So when you're decoding the brain waves, whether it's blood flow
or the magnetic fields, and you said there's noise,
how is your algorithm filtering that out?
And how is it breaking down?
Because you said there's different data.
The way the data comes from an fMRI is different from the way it comes from an MEG.
So can you explain how the algorithm is reinterpreting that?
Sure.
So maybe just to start with, the reason why I said that when you look at it,
it looks like noise, is because these signals are impacted not just by neural activity,
but by a lot of different factors.
So, for instance, magnetic fields are constantly evolving.
I shouldn't try to say this in front of you guys
because you know more about this than I do.
But we are in a flux of magnetic fields all the time.
And the magnetic fluctuation that are being generated in the brain
are extremely small, orders of magnitude smaller
than those of the objects that surround us
and move around when they have metallic parts.
And so the signals that can be picked up
are basically contaminated by all of these things.
So when you look at the raw data, it's very difficult to guess anything, actually.
You would probably need to start to do the very same task again and again
to try to average out the noise and start to see what is the average brain response.
So you're really looking for patterns more than anything else.
Then what better than use AI to recognize patterns.
That makes perfect sense.
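To make that averaging idea concrete: a minimal sketch in plain NumPy, with made-up numbers, of how repeating the same stimulus many times lets a hidden response climb out of the noise. Averaging N trials of independent noise shrinks the noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 1000, 500                    # hypothetical: 1000 repetitions, 500 time points
t = np.linspace(0.0, 0.5, n_samples)               # half a second per trial
evoked = 1e-6 * np.sin(2 * np.pi * 10 * t)         # a tiny 10 Hz "brain response"
noise = 10e-6 * rng.standard_normal((n_trials, n_samples))  # noise ~10x larger than the signal

trials = evoked + noise            # every single trial hides the same response in noise
average = trials.mean(axis=0)      # averaging N trials shrinks the noise by ~sqrt(N)

# the hidden pattern re-emerges: correlation with the true response is high
print(np.corrcoef(average, evoked)[0, 1])
```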
Let's back up for a minute.
I understand you can look inside someone's brain and see the image that they're seeing.
as though you were somehow their eyes behind what the brain processed.
Do I understand this correctly?
The goal is to try to understand how the brain represents perception.
In the case of this experiment you're alluding to,
the individuals are typically watching images one at a time.
Each image lasts for about a couple of seconds.
Do you do this on mice before you did it on humans?
Well, you'd only see big chunks of cheese.
Are you saying this because I'm a French researcher?
We do not work with non-human animals in our team,
but of course in neuroscience you have a wide variety of approaches
and a lot of people are indeed working on the visual system.
Rather, in macaques and mice,
mice are not so great for vision.
But yes, there are a lot of different paths, if I remember correctly.
Well, I'm not an expert in this,
but I think they do see things,
but they don't count on vision as much as we do.
Right, right.
So what you're really doing is you're measuring these signals
as a person is seeing something.
And that, what you're measuring, once you filter it,
you're able to determine that this is the pattern.
And if we match that from person to person,
what are you measuring against is really my question.
So you really have two types of things, right?
You have the images that you present to the participants
and you have the brain responses
to those images.
And the whole goal is to try to find the linking function between the two.
Okay, so you can use the same person, actually,
and just replicate that over and over again.
If you keep seeing the same pattern,
then you know from this pattern that represents a sports car
or a this or a that.
So you don't have to...
You're mapping the signals.
You're mapping the signals.
Right, right.
Absolutely.
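For flavor, here is a minimal sketch of that "linking function" idea on simulated data, with made-up array sizes. Ridge regression is one common choice in the decoding literature, not necessarily this team's exact method:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features, n_sensors = 1000, 50, 64        # hypothetical sizes
X = rng.standard_normal((n_images, n_features))       # one feature vector per presented image
W_true = rng.standard_normal((n_features, n_sensors)) # unknown image-to-brain mapping
Y = X @ W_true + rng.standard_normal((n_images, n_sensors))  # simulated noisy brain responses

# learn the linking function on some images, test it on held-out ones
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)
model = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X_train, Y_train)
print(model.score(X_test, Y_test))  # held-out R^2: does the mapping generalize?
```

Run the same machinery the other way, with brain responses as inputs and image features as targets, and the mapping becomes a decoder.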
So wait a minute, here's the real rub, though.
The human brain varies from person to person, not in its general regional response to stimuli,
but in how we actually perceive things.
So how do you make sure that what you're measuring in one person is actually going to be what you're measuring in another person?
Like if I were to lose my sight, my occipital lobe would go, like,
dead, but
other parts of my brain would
take up that activity, and
so you would be measuring a completely different
data set because I'm
blind, but in my mind
I would still be seeing stuff.
I think you're
highlighting something which is actually an open question
at the moment, which is the
inter-individual variability of
neural representations.
Yeah, that's what I mean.
And it's, so up to
recently, most of the
human neuroscience research was really trying to focus on what was common across individuals.
So typically the very sort of standard experiment is you take 20 or 40 participants like you and me
and you make them do a task for about an hour in the scanner and then you try to see whether
their brain responds similarly to the same stimulus. For instance, if you present half of the
images with faces and half of the images with houses, is it the case that the brain areas that
respond to faces are similar across individuals?
And the result is that there is a surprisingly
common structure across individuals
in ways which raise questions.
For instance, you have an area in the brain
called the fusiform gyrus,
which is an area that responds specifically to faces.
And this area tends to be located
in a similar part of the brain for every individual.
Which is fine.
You can say, okay, maybe genetically this was pre-programmed
to have some neurons in the brain which are specifically tuned for this.
But it also is the case for reading, for instance, for orthography.
So if you present words, you can find that indeed some parts of the brain
respond specifically to the letters that you know or the words that you know.
And this tends to be in a brain region which tends to be similar across individuals.
But this cannot be genetically programmed, right?
because words are something that emanates from culture.
This is a recent trait.
So trying to understand why the same high-level representations
end up being represented in the same place in the brain
is a major question.
Now, having said that, the field is shifting towards more and more focus on individuals.
And we do realize that indeed the representations are very specific
to some extent to individual brains,
and that so far we may have emphasized too much the similarity across individuals
and not paid enough attention to the individual specificities.
But if you have to calibrate against the individual for the individual's thoughts,
then you can't just come up to a stranger and know anything about them.
So we would, for instance, we would know that auditory inputs,
so sounds that come into your ear tend to be processed in the same brain regions at first, right?
It's not that the ear is connected to a random part of the cortex,
it tends to arrive ultimately in the primary auditory cortex
and this would be common to most people
except if you have brain lesions or a variety of pathologies,
and that would be the same for vision
and that would be the same also for the sense of numbers
for instance if you have a sense of magnitude
this is typically hosted in the parietal cortex,
and this tends to be the same across individuals
but as soon as you want to get more specifics
you want to really try to get a more fine-grained
level of representations, then this becomes really specific to individuals, and it's difficult
indeed to transfer the knowledge that we observe from one participant to another.
This is Ken the Nerdneck Zabera from Michigan, and I support StarTalk on Patreon.
This is StarTalk Radio with Neil deGrasse Tyson.
If we step back to offering an image to a patient,
how accurate is your algorithm now in terms of replicating as much of that first image,
and how much does the algorithm say, well, I'll take a calculated guess at filling in the blanks?
That's a very difficult question.
How many blanks are there?
That it needs to fill in, yeah.
Because the metric that we use for evaluating how well we reconstruct the images in this case is not well posed.
So if you take, for instance, a pixel level, you want to compare how good your image,
the image that you manage to decode from brain activity is compared to the true image.
You may get every individual pixel wrong because perhaps, I don't know, the color is slightly off
and the objects are slightly to the left or to the right.
and so you would have a very bad decoding metric
but if the image has the same content
if it's, I don't know, the true image has a horse
and you also decoded a horse
you don't want to say that this was a terrible reconstruction
you want to say well it's maybe not pixel accurate
but it tends to have the right concept
and so there is for now a difficulty
in even quantifying the quality of the reconstructions
however what is striking is to see that
when you have a lot of hours per participant,
typically 20 to 40 hours per participant of them just watching images in the scanner.
And you have a very good scanning technique, like an ultra-high field.
Yeah, this would be a huge amount of data for neuroscience, not for physicists.
The universe is bigger than your brain.
I was going to say, they're only mapping the entire universe.
Of which your brain is a part.
So once you have a lot of data per individual, then you can really start to reconstruct
what they perceive in a surprisingly accurate way.
However, going beyond perception
currently remains very difficult.
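A sketch of the two kinds of metric being contrasted here, with hypothetical embeddings from some pretrained vision encoder; the choice of encoder is an assumption for illustration, not the lab's stated method:

```python
import numpy as np

def pixel_error(img_a: np.ndarray, img_b: np.ndarray) -> float:
    # pixel-level metric: punishes small shifts or color changes
    # even when the decoded content is conceptually right
    return float(np.mean((img_a - img_b) ** 2))

def concept_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    # concept-level metric: cosine similarity between embeddings of the
    # two images from a pretrained vision encoder (hypothetical choice)
    return float(np.dot(emb_a, emb_b) /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# A decoded horse shifted a few pixels scores badly on pixel_error but
# well on concept_similarity, matching the "it's still a horse" intuition.
```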
Okay, so if you've offered an image to a patient,
you get a certain set of data back,
depending on the subject matter of that.
What's the difference if the patient is asked to imagine an image?
Do you get a variance?
Seeing it through your senses?
Exactly, yes.
We're talking mind's eye, for want of a better term.
Right. So in the case of perception, this is where the most progress has been made.
So when you watch an image or when you hear a sound, it is becoming increasingly easy to decode what the person has seen or has heard.
However, when you do the same type of tasks, but on imagination, you can get performances above chance level from a statistical point of view.
But frankly, it's not very convincing to anyone who doesn't just want to look at the stats and wants to see the
reconstruction. And the reason for this is, well, there are two reasons. The first reason is that the
signal to noise ratio in imagination is much lower than in perception. So when you look at the brain
signal, on average, they are weaker when you try to imagine, let's say, an apple.
Some people have vivid imaginations, though.
And I don't think we know this yet. I think it remains
to be evaluated whether the people, for instance, who claim not to have any visual imagination
indeed do not have a representation that would be decodable at all.
Because I just learned days ago that a colleague of mine,
he went around the room and said,
picture an apple in your head.
Picture an apple.
Okay.
Picture an apple.
He can't picture an apple in his head.
Wow.
Right.
Is this some rare...
Not even the computer?
He cannot conjure an image on command in his head.
We all thought of apples, red apples, green apples. But he can't conjure any image on demand.
Well, he used that as a simple one. But so, I didn't know this was an issue.
Yeah, I think this is actually quite common.
I am not an expert in this, but I think the term is aphantasia, I think.
It's something like more than 5% of the population, I think,
that claim not to be able to visually imagine objects.
So they don't become artists?
I don't think artists are restricted to just imagining objects in the head.
You have musicians that may not engage in this kind of imagery.
How much further, in terms of a percentage, do you think your research is going to take
our understanding of how the brain interprets images?
This is a very difficult question.
Again, the...
Sorry.
The question was easy.
It's your answer that's difficult.
Probably, yeah.
I don't know about our research specifically,
but what is clear is that there is huge progress
being made thanks to AI,
but not as a tool like you would see in other sciences.
So, for instance, in, I don't know, in biology, in cosmology,
in sciences where you have a lot of data, you use AI as tools.
You have a lot of numbers, you don't know how to crunch them.
You train a system to do whatever you're looking for,
and it helps you process this data.
In neuroscience, we also do this,
and the pattern matching that we discussed earlier,
but we also use this as a modeling framework
because the AI system in a sense
is also trying to do something that we do.
We train AI systems to perceive the world,
to try to recognize objects,
to reason upon the world,
to discuss with us in a linguistic form.
And so this creates basically systems
that can then be used as models
of how the brain works.
This is really accelerating
I think the understanding of
how the brain functions.
So you talked about linguistics there.
If you presented a sentence to a patient,
then you're going to have that sort of perceptual stage
of where they perceive the sentence,
they see the sentence,
then you go through what they call a lexical stage
and then a contextualization stage.
That all makes sense.
Good.
I mean, that's basically how we communicate.
I know, but are you able to get the algorithm?
I don't like seven-syllable words, though.
Are you able to get the algorithm
to feel the nuances
of the brain
and actually see how that breaks down?
Is that just the future?
I'm waiting for this answer.
Maybe I can say
how we do this in the first place, right?
So we can have individuals like you and me
and I'm often a subject of my own experiments
going into the scanner
and reading a sentence, right?
And so you flash a sentence
word by word, once, upon, a, time, and so forth.
And for each millisecond you can see, okay, what is the brain activity now?
What is the brain activity now?
So you end up with an activation pattern associated with each moment of time and that you
can time lock to words or to syllables, phonemes.
And then you can do this same approach in the AI algorithms.
You can present a sentence and deep learning networks nowadays have activation patterns inside
of them, which are known to be difficult to interpret.
But nevertheless, we can do the same trick.
We can time-lock the activations of the deep nets in response to words, syllables, and so forth.
And then we can do the comparison between the activations of the AI systems to the activations
of the brain.
And we don't know what these two things represent, but we can still try to do correspondence,
to try to see whether they tend to be similar in the geometrical structure that they hold.
And what we observe is that this helps us decompose the stages of processing that you mentioned.
So we can first see that you have algorithms that are trained to do visual processing,
but know nothing about words, about language, that you can map and correspond to the activations of the perceptual system.
And then you can do the same type of comparison with an algorithm which this time is not trained to recognize images or pixels or to transform pixels,
but it's trying to analyze words and combine them together.
And you will see that the activation patterns of these algorithms
that are processing things at the language level
and not at a perceptual level,
they do have activation that corresponds to other brain regions
and other time moments.
And so we can try to do this sort of one-to-one correspondence
between the model and the brain
to try to understand the structure of these representations.
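A minimal sketch of that comparison on simulated data: time-lock both signals to the same words, fit a linear map from network activations to brain activity, and score it on held-out words. Cross-validated correlation of this kind is one common "brain score" recipe; exact pipelines vary, and the sizes below are made up.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_words, n_units, n_channels = 500, 256, 64            # hypothetical sizes
model_acts = rng.standard_normal((n_words, n_units))   # deep-net activation per word
# simulated brain data that partly shares the network's representational geometry
brain_acts = model_acts[:, :n_channels] + rng.standard_normal((n_words, n_channels))

scores = []
for train, test in KFold(n_splits=5).split(model_acts):
    mapping = RidgeCV().fit(model_acts[train], brain_acts[train])
    pred = mapping.predict(model_acts[test])
    # correlate predicted and measured activity, channel by channel
    r = [np.corrcoef(pred[:, c], brain_acts[test, c])[0, 1] for c in range(n_channels)]
    scores.append(np.mean(r))

print(np.mean(scores))  # a crude "brain score" for this model layer
```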
And where exactly in that process do you get the...
language model to, I'll say, mimic perception and the nuance that we have, which is
experientially based. So when you look at once upon a time, there is an activation
pattern, right? Right. And you can replicate that activation pattern in the AI. But what
you can't do in the AI is replicate all the different things that
Once means to you.
I went to the movies once.
Really, only once you went to the movies?
Once upon a time.
I know that as the beginning of all fairy tales.
So it brings in a completely different contextual meaning.
So where along that line of comparison do you get to interject what we do that machines don't,
which is intuit and find nuance?
Right.
That's a great question.
And maybe I should emphasize one thing,
which is that when we do this comparison,
we don't actually train or tune the algorithms to resemble the brain.
We don't actually try to inject this knowledge.
We just have these AI algorithm that we can use off the shelf,
open source models, either produced by our colleagues
or by the rest of the scientific community.
And these algorithms, they're not trained to mimic the brain.
They're trained for whatever other purposes,
to be chatbots and to recognize cats from dogs in images.
but what we observe empirically
is that training these algorithms
tends to make them
generate representations
which are comparable to those
of what we do in our brains.
Okay, first of all, that is scary A.F.
I mean, it's fascinating and it's
really cool, but it's also kind of
scary. Tell them what A.F. means.
It's scary as
but the reason why it's a little
scary is because
on the one hand, it kind of diminishes
us as this crowning
jewel in all of creation
with the zenith
of intellect that we believe
that we hold.
Wait, the zenith of anything?
Right. That's what, yeah.
Couldn't we do with a little bit of humbling
every now and again?
I don't know about you.
No, no, here you go. Here's how you get out of that.
Here's how to get out.
We are so brilliant.
Right.
We created something more brilliant than ourselves.
So I wouldn't say this quite yet
because AI is really limited in many ways
today, in spite of the hype.
I understand the emotional reaction,
but frankly,
I also think that there is a source of marvel here, right?
For the first time we have AI systems or systems
that we trained for a task, right?
The task is surprisingly arbitrary or even mundane, right?
For instance, trying to predict the next word given
the preceding words.
That sounds like...
I mean, that is what all LLMs do.
Exactly, yeah.
Large language models.
Thank you, yes, large language models.
And this simple task pushes the algorithm
to generate hidden latent representations,
which resemble those that we have in our own heads.
And that suggests something to me, which is very profound, right?
There exist general principles that push these systems,
biological or artificial systems,
to generate a similar computational path,
a similar set of representations.
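To make the "surprisingly mundane" task concrete, here is a toy next-word prediction step in PyTorch. The tiny model and vocabulary are stand-ins, nothing like a production LLM; the sketch only illustrates the training objective being described:

```python
import torch
import torch.nn as nn

vocab_size, dim = 100, 32                        # toy vocabulary and model width
embed = nn.Embedding(vocab_size, dim)
lstm = nn.LSTM(dim, dim, batch_first=True)       # stand-in for a real language model
head = nn.Linear(dim, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 10))   # one hypothetical 10-token sentence
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each token from its predecessors

hidden, _ = lstm(embed(inputs))
logits = head(hidden)                            # a score for every possible next token
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                   targets.reshape(-1))
loss.backward()  # this single objective is what shapes all the internal representations
```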
So is there a similarity between the brain
and how it processes
data, and its architecture, and that of a large language model,
such that it's learning in a very similar way to the human brain?
Because as I understand it, the original idea of neural nets as invoked in computers
was an attempt to mimic what we thought our brain wiring was doing,
and we learned that that's not really how our brains work.
So it's just dangling there now as its own thing with its own utility,
but it's no longer a biological analog.
Yeah, so the history of AI and neuroscience is intertwined quite a lot, but for a long time these links were metaphorical.
The idea of a neural network was, I think, a useful concept, but the goal was not to be as close to the brain as possible.
In fact, it's really a huge simplification, this idea of artificial neurons, as compared to what was already known at the time.
And you've had these bridges between the two disciplines for many decades.
What is different now is that this comparison is not just like conceptual kind of loose.
It's very precise.
We can quantify the extent to which the activation patterns in the brain
and the activation patterns in these AI systems do look alike or not.
And even though these systems are not built for that purpose.
Now, having said that, I also feel the
need to mitigate these results, because this is a tendency that we have, but we also see a lot
of edge cases where this does not work. So typically if you take the very best model, the
largest model, this similarity tends to break down. So we do have cases where the, what we call
the convergence of representations between AI systems and the brain is not monotonic, it's not
systematically the same.
All right. One thing I haven't quite got to grips with is the speed
at which an image registers in the brain,
how quickly that is,
and how quickly you're able to then process
that data back through an algorithm.
In the head?
In the head, it's quite slow, actually.
So when you look at reading, for instance,
you flash a word onto your retina.
This takes about 70 to 100 milliseconds
to really light up the visual cortex in the occipital lobe.
And from there, you'll get another 50 milliseconds
for this visual information to be processed.
A millisecond is a thousandth of a second?
A thousandth of a second.
So 50 milliseconds would be five hundredths of a second.
Yeah, that's correct.
I'm not good with math, especially not in my native language.
You're messing with my head if you're not good with math.
But so, yeah, around 100 milliseconds, this is really when the activity peaks in the visual cortex
for the sensory processing, let's say.
And then this information is being analyzed
into edges that will eventually construct the representations of letters and of morphemes of words.
And this is around 200 milliseconds, so one-fifth of a second.
And then it takes another 200 milliseconds.
So around 400 milliseconds, the semantic parts of words really arise in the brain
and are broadcast to a wide variety of brain regions.
And so this process is relatively slow.
It takes about half a second for you to analyze.
How fast would machine learning do it?
Or, let's say, an OCR, how fast would it know?
Yeah, in terms of inference, the machine would be much faster.
It would be just a few milliseconds.
A few milliseconds to do the whole process, and we take, like, half, a full half a second.
Absolutely.
So we're basically like, duh.
Well, at the inference stage.
So what we...
Stop giving AI ideas about what to do with us when it becomes our overlord.
What is fast is what we call the inference stage, right?
So once you already train the algorithm, using it is actually very fast.
What is typically slow is loading the information onto the graphics card.
But once it's there, it's actually very fast.
However, training these algorithms is ridiculously slow, right?
If you want to train an LLM today, a large language model today,
you need trillions of words, which represents many, many lifetimes of just reading
all of the texts that we've created in humanity.
That takes us back to what we were talking about earlier.
So in order for it to know, it has to see all the words.
In order for us to know, we just have to see like a word
and then something similar, and we're like, oh, yeah, it's that.
You know, so take, for instance, a ball.
If you show us a ball, you can show us one ball.
We've never seen a ball before in our life, and you show us a ball.
And then you show us a basketball, and we'll say,
a ball, and then you show us a baseball, and we'll say, that's a ball.
But the machine is like, well, I have never seen that before, so that's the difference.
Yeah, this is one of the many differences.
I mean, when we emphasize similarities, we emphasize similarities because we are in the field of differences.
Everything is different.
The architecture is different.
The type of data that they receive is different.
The training, of course, the physical instantiation is different.
This is also highlighting why we are
all the more surprised and interested in the fact that in spite of all of these differences,
we can still find similarities in the way they process information.
Wow.
And you've got, when you've got these sort of caps that you're putting on,
they're sensitive enough to be able to operate at that sort of speed.
I know you say it's slow, but that for me is really quite fast.
So it depends on the device.
So with functional magnetic resonance imaging, fMRI,
you get a snapshot of brain activity approximately every two seconds.
So a lot can go on within two seconds.
However, if you take magnetoencephalography,
you can get a snapshot every millisecond.
So you'll get a much more well-resolved signal in time.
But the spatial resolution now is much lower.
So you tend to have a blurry image, let's say, of brain activity.
So you have a trade-off between these different technologies.
Wow.
So it's kind of like cosmology.
The better your looking tools get,
the easier it's going to be
for you to figure out
anything you need to figure out.
It's just a matter of,
we've got to be able to see it
faster and with more clarity.
I got a question.
When I think of the brain,
I think of it as an organ,
and there are these parts of the brain
that are similar
from one person to another
even if there's differences in detail.
Are we to believe
that the brain knows in advance
how it would divide up its territory?
Or are we all just socialized the same way?
We all grow up in a civilization,
and so we all have the same influence on our developing brain
for it to take the shape the way it does.
So this is a very profound question.
There is a tension.
I'm not a historian of science,
but there is a tension in the field that dates back to philosophy
between empiricists,
people who think that the structure of representations
come from the data to which you are exposed
and rationalists, the people
who would rather emphasize
the importance of innate representations
and innate structures.
So you have, for instance,
Plato, on the one hand,
if we take this back to ancient Greece,
would really be in the rationalist point of view,
with this idea that there exist
innate representations in ourselves,
and ultimately we can approximate them
with reasoning.
Whereas other people,
and I think the whole study of AI
is really on the extreme empiricist side:
just let's take a blank system and just press a lot of data
onto it, and ultimately this system will manage to perform a task. And what is interesting
nowadays is to see that irrespective of whether the representations are innate or acquired
through exposure, not necessarily culture, but even just sensory data, they seem to at least
have some similarities. This is what I think is interesting in the case of this comparison
between AI and the brain for language.
The brain is obviously
structured very differently
to these AI algorithms.
And obviously there must be some innate
structuring in our brain.
This is why only humans have language,
in the sense of being able to combine words together
in order to reason and to communicate.
This is not an ability...
You remind me of a New Yorker comic
two dolphins swimming together.
Right.
They're in a water show.
Oh, okay, like the Sea World type.
You see, well, they're swimming together, and one says to the other, of the humans.
Right.
They face each other and make sounds, but we're not sure they're actually communicating.
Right, that's pretty funny.
But there are a lot of experiments.
Yeah, their brain's bigger than our brain.
For some of them, not all of them.
But so the reason why there is, I mean, there's been a lot of experiments on behavior with dolphins, but also with apes,
to try to see whether they would be able to combine concepts.
And there are some experiments that show that in some edge cases, they are able to do this.
But for now, we don't have any evidence suggesting that you have any other species that can learn this vast amount of concept and be able to combine these concepts together in order to produce a sentence or to understand a sentence or new meaning that they've never heard before.
So this ability must be, to some extent, encoded in our genome and be an innate structure.
Well, it has to be.
I mean, we're the only ones, and it's so funny because it's dissociated from everything else that we are, and we have language.
For instance, I can be deaf, dumb, and blind, and you can teach me any language.
I don't have to have an actual reference like everybody else does.
So, I mean, we are truly unique in the way that we do communicate.
I don't know if, I mean, I'm sure other animals communicate too.
I'm not sure.
Yeah, all animals communicate.
But we're very unique in that
if I don't know how to communicate
with somebody I meet from halfway across the world,
we will find mediums that allow us
to know each other's language.
Right.
And this is coming back to the whole
empiricist versus rationalist tension.
This is why there is something very interesting here.
So we established that the human brain
must have some genetic or innate properties
for it to acquire language.
This is why it differentiates itself in part from other animals.
And we also know that these deep learning algorithms,
they have very little of what we call inductive biases.
The architecture that we use in deep learning,
they are remarkably blank and versatile.
And so it's really the data with which they are trained
that pushes them to build the representation that they have.
And nevertheless, it seems to be comparable,
at least to some extent, to those of the brain.
Not in every way, but in some ways.
And so that suggests that no matter where you come from,
whether you come from this really rationalist type of approach
to cognitive science or much more from an empiricist approach,
there seems to be some sort of convergence between these two approaches.
What I want to get into now is the application of your research,
where it could go as we progress with this.
Now, I spoke in the opening about people who can think but can't speak.
Is there an opportunity with this research to give them a voice,
to have their understanding made public, made aware?
Right. So you have indeed a lot of patients who suffer from an inability to communicate
typically because of a brain lesion, so either a traumatic brain injury or anoxia
that will lesion the part of the brain which is responsible for, for instance, motor control.
So they will be paralyzed or lose the ability to make facial movements.
And there are now a few teams that have shown that it is possible to put a set of electrodes in the motor cortex
and to use these neural signals to feed algorithms that can then be used to do a brain-to-text translation
and allow the individuals to regain communication abilities.
So this is already something which is happening with invasive approaches.
So with electrodes, which are implanted with neurosurgery.
One of the goals, of course, is to try to see whether it would be possible to push this approach
with non-invasive devices, which do not require brain surgery, in order to rehabilitate
communication in patients, but also perhaps to diagnose.
So sometimes you have patients who do not respond, but they are awake.
It's a paradoxical state which occurs sometimes after a coma.
And in these patients, you want to know whether they don't communicate because they're not conscious of the environment or whether they don't communicate because they are fully paralyzed, for instance.
Well, they just don't like you.
And perhaps they just don't want to, which is actually an issue, right?
If you have lesions to the parts of the brain that are intrinsically linked to motivation, that could also be a cause of a lack of action.
I'm tired of talking to you.
I'm just done.
And so for these patients, having devices that would allow us to, well, allow them to communicate,
but even to allow us to know whether indeed they are conscious or not of the environment
is certainly of a prime use, yeah.
Are you likely to find that sort of ability in the near future, or are we having to wait?
For invasive electrodes, this is already happening.
You know, our boy, Elon, that's what he wants to do.
Neuralink.
Yeah, he wants to put a chip in everybody.
Everyone's...
You can do that
through the vaccines.
Well, no, that's Bill Gates.
Let's get our billionaires straight.
Okay?
Elon wants to put a chip in your head.
I mean, an electrode in your head.
Bill Gates already did it.
I think the limits of the technology, as you pointed out,
will be reduced because of AI, and they'll find solutions sooner.
But it's the ethics of being able to potentially sort of decode the brain's messages
and then reverse engineer it so that you can read someone's mind.
It's the ethics of that being possible,
because I think that's going to freak
not just Chuck out.
You know, I'm going to be honest, though,
isn't that already happening
when you look at metadata
that's taken from our phones
and our location
and other phones that are around us,
couldn't you pretty much tell what I'm thinking?
Well, I don't know,
but what I can say is
these are certainly topics
that come up very often
and there are several things to say.
The first thing is that what is possible today
in terms of decoding brain activity
is really limited to specific cases
like perception and motor control.
And the reason for this is because we know
what the person sees, so we can attach the image
to the brain patterns.
However, as soon as we try to do this in imagination,
for instance, as we mentioned before,
then things becomes drastically more difficult,
not just because of an inability of the algorithm
to work, but
because the signal is just not there.
That means it's not likely that you will anytime soon read someone's dreams.
Until you get a signal booster.
From a statistical point of view, for fundamental research,
there is research on the science of dreams.
However, all of the evidence points to the fact that it will be very difficult
with the state of knowledge that we have to have a device which can read your mind
in the way that people think,
like with your train of thoughts and all this.
And the reason for this is because
even with the largest multimillion dollar
type of devices that are being used,
signals remain extremely noisy,
and it's very difficult to go beyond this.
So the physics of the signal that we pick up
is really the main constraining factor,
not the AI algorithm part.
So the AI algorithms can be used as a useful tool,
but in terms of the signal that you can pick up,
You can't generate the input necessary for them to do a good job.
Yeah, the data that can be collected with these devices remains extremely, extremely noisy.
And so from that point of view, the risks seem limited.
Now, this is the current state of affairs.
But our role here as a scientist is also to say what is possible, what is the state of the art,
and to share this through the research, through open-sourcing and all this.
That's the reason why we do this work openly.
In science, you're always limited by your signal to noise.
Absolutely.
The signals, you'd have to add up days, weeks, months of measurements to pull a signal out of that noise.
But then, this only works if you have the same signal that comes up again and again.
Whereas when people think of mind reading, they think of reading the mind at a given instance.
You don't think of the same thing again and again and just repeat this until your noise averages out.
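The averaging argument in one formula: if the noise is independent across repetitions, averaging N trials of a signal with amplitude A and noise level sigma improves the signal-to-noise ratio by a factor of the square root of N, so a single, never-repeated thought gets no such benefit.

```latex
\mathrm{SNR}_{\mathrm{avg}} = \frac{A}{\sigma/\sqrt{N}} = \sqrt{N}\,\mathrm{SNR}_{\mathrm{single}}
```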
So this is why currently all of the evidence suggests that there is not a systemic risk.
However, technology continues to evolve, and we want to make sure that the risks are limited.
And this is also why we engage in these kind of discussions, of course, to ensure that the discussion does not just happen within the scientific community, but with the rest of the...
So when is the time to make that determination?
Is it now before you actually have the equipment to measure this?
What determination?
The determination as to the
ethics, like codifying the ethics themselves.
Guard rails going forward.
When do you come up with those guardrails?
Because if you come up with them after you're able to do it,
it's the, you know, the barn is, the horse is out of the barn, as they say.
Yeah, absolutely.
So this has already started, right?
There is already a lot of regulations on what you can and cannot do.
For instance, I work in France, and so we have the GDPR in Europe.
It constrains the way the data that is being collected from brain imaging can be used.
In France, for instance, you're not even allowed
to do neuro-marketing.
You're not allowed to use brain data
for marketing purposes.
So this discussion is obviously already engaged,
and along the way, we need to continue
and update these decisions
with the state of knowledge that continues to evolve, yes.
What was the movie, Minority Report,
where they had this sort of scenario.
Precogs.
Yeah.
Precogs.
It's a great movie.
That's everyone's default thought
as regards this research
on where it leads to.
And I think that's what scares them,
and I think they'd be grabbing, not just for the guardrail.
I mean, do you look to...
Well, pre-cogs, you were not digging out of their head
what they saw from the past.
You were digging out of the head
what they foresaw in the future.
Right, so that was different.
So they would see you committing a crime
that you haven't committed yet,
but you were definitely going to commit.
And then they'd just go arrest you.
Right.
They started doing that, the crime rate went to near zero.
It's kind of like immigration in America right now.
Oh, how interesting.
You pre-arrest people.
You just pre-arrest people.
So what is
the endgame for Meta in this?
I actually don't know why
Meta hired me in the first place.
I can only tell you what we're trying
to do within our team.
So the goal here is
well, the goal is
well posed, right? We have
now some preliminary evidence
suggesting that you have
similarities between
AI systems and the brain.
And that suggests something which to me
is very intriguing, that there exist
these general principles
that shape the information processing in AI system and the brain.
So discovering what those laws are
and also trying to understand what is missing in AI systems
for them to be as intelligent, as efficient as us
remains a major topic of research.
So this is why we're pushing on this frontier
to better understand the brain and make better AI algorithms.
So if you're able to achieve that,
people are going to feel an invasion of privacy,
they're going to feel thought security
becomes potentially compromised.
I mean, you said there's discussion
over the ethical point of view.
Are we looking at those sort of features as well?
So, yeah, it's the same topic
that we briefly discussed before.
We have an ongoing discussion
to try to see whether AI and neuroscience developments
are changing the risks associated
with, for instance, mental privacy.
As of now, the discussion is ongoing,
but I don't think we have a change
in a technology that changes the risk.
What we observe is that it is possible to decode brain activity in certain cases,
typically for motor control or for visual perception.
But it is not possible to decode what you are thinking at a given moment
or your train of thoughts or to extract your password from brain activity.
The reason for this is because the signal that we have...
It just takes the password out of your head.
Right.
Like the readers that they have now that steal your credit card.
Right.
That's radio frequency.
The physics on which we base these analyses prevents us from working outside of the lab, right?
So with an MRI, you need to plunge someone into a very high magnetic field.
This is not something that can be translated for, I don't know, consumer products.
But what you could envision is a dystopic future where a state that has the power and the money
could actually have a machine that could read your brain and, during an interrogation,
extract information from you
that you don't want to give up
you know, basically, like,
you violated my mental privacy.
So, you know,
that's actually foreseeable
based on just what we talked about today.
Yeah, I mean, if we go down
to the dystopic possibilities,
I suspect that the states
will not need an MRI to force you
to give away your password. But it is an important...
Good point.
I'm just like, I'm not going in your MRI.
I refuse. Just like, yeah, okay, yeah.
This baton says different.
But still, if the risk does exist, we should try to characterize it, to assess, like, what is the path to that risk.
And this is part of the scientific enterprise, too, yeah.
Cool, man.
You've got some new research that you're about to release into the public domain.
Can you sort of expand upon that for us, please?
Sure, yeah.
So far, we've done this comparison between
AI systems and the brain with adult participants.
And to some extent this is frustrating because there is something
which is missing here in the picture, which is the learning process, right?
So in the case of language,
we don't just want to understand how the brain processes language,
but we want to understand what makes it able to acquire it so efficiently.
Like, with just a few words, we acquire language.
The average number of words that we hear is typically around a few
thousand per day, a few tens of thousands per day.
And if you compare this amount of data to the data which is input for the training of
AI models, this is really a droplet of information compared to the oceans of data that
these algorithms use.
And so what this means is that fundamentally the architectures or the training principles
that we use for AI, they are really, really mediocre, right?
We need to understand much better how you can get to a system that
learns much more efficiently.
So if you train your AI on children,
you may end up learning
how we actually learn or acquire language,
but then you're also going to have AI saying things like,
I hate you so much, I hate you,
you never let me do anything.
But certainly it would be important to understand,
not necessarily to train AI models with this data,
but to understand the principles
that allow young children
to acquire language so efficiently
this is one of the big marvels
of our species and this is certainly
what we try to understand.
So this is actually a work
with a hospital, the Rothschild Hospital
in Paris, that has a unit
for epilepsy in young patients,
down to patients two years old.
So you have these patients who suffer
from intractable epilepsy. Again,
same patients as we mentioned before.
We have electrodes that are implanted
inside the brain in order to identify the location that is generating the seizure and who can
stay for about a week in the hospital and listen in that context to audiobooks. And then we can
time-lock the brain responses to each individual words to try to understand how the representations
of language are processed in these young patients and how this evolves with age.
Let me see if I can offer a perspective here. I'm as big a champion
of AI as the next person, but I still enjoy being human and whatever I can do to distinguish
being human from a machine, I will embrace, leaving me to wonder whether the true creativity
of what it is to be human may actually lurk within the noise that can never be read by a machine.
The first person to paint
an impressionistic representation of reality:
could a machine have had that first thought?
Or was that a human being
rummaging within the noisy confusion of our own brains
pulling out something that no one had done before
and no one had imagined before
and in the end genuinely creating
that which is human
and can never be machine.
I just wonder, that is a cosmic perspective.
And that was beautifully said,
except the first impressionist
was just some dude who was nearsighted.
It was all just fuzzy.
It's just how he saw the world.
People are like, what an incredible interpretation.
He's like, what are you talking about?
So, Jean-Rémi, thank you for visiting.
Oh, yeah.
All right, Chuck, always good to have you, man.
Always a pleasure.
Gary.
Yeah, Neil.
And thanks to Lane and others for coming up with these topics.
They keep coming up with them, so we're going to keep finding them.
Thank you.