TED Talks Daily - Can AI match the human brain? | Surya Ganguli
Episode Date: March 7, 2025. AI is evolving into a mysterious new form of intelligence: powerful yet flawed, capable of remarkable feats but still far from human-like reasoning and efficiency. To truly understand it and unlock its potential, we need a new science of intelligence that combines neuroscience, AI and physics, says neuroscientist and Stanford professor Surya Ganguli. He shares a vision for a future where this interdisciplinary approach helps us create AI that mimics human cognition, while at the same time offering new ways to understand and augment our own brains. Hosted on Acast. See acast.com/privacy for more information.
Transcript
When you choose Athabasca University's online MBA program, you'll get more.
Experience more flexibility so you can pursue your degree while balancing work and family.
The AU Online MBA is designed to fit individual schedules so you can successfully complete the program from home, work, or even while traveling.
You don't need to leave work for a month, a year, or even every second Friday.
Choose a more flexible MBA and get more out of your education.
Learn how at athabascau.ca slash flexible MBA.
This episode is sponsored by Cozy.
You know how daunting it can be to transform your living space.
Well, there's this Canadian furniture company called Cozy
that's aiming to make that process a whole lot easier.
Cozy is all about blending style with practicality.
Their furniture is customizable, so people can start small and add pieces as they go.
And get this, they've got this AR feature that lets you see how the furniture looks
in your space before you buy.
Pretty cool, right?
They've also launched the new Mistral outdoor dining collection.
It's designed for creating the ultimate patio setup with powder-coated aluminum furniture
that's both durable and easy to store.
Cozy offers free swatches and quick 2-5 day shipping.
Seems like they're really trying to simplify the whole furniture buying process, so if
you are thinking about giving your space a makeover, you might want to check it out.
Transform your living space today with Cozy. Visit Cozey.ca,
spelled C-O-Z-E-Y, to start customizing your furniture.
Support for this show comes from Airbnb. Last summer my family and I had an amazing Airbnb
stay while adventuring in Playa del Carmen. It was so much fun to bounce around in ATVs,
explore cool caves, and snorkel in subterranean rivers.
Vacations like these are never long enough, but perhaps I could take advantage of my empty
home by hosting it on Airbnb while I'm away.
And then I could use the extra income to stay a few more days on my next Mexico trip.
It seems like a smart thing to do since my house sits empty while I'm away.
We could zipline into even more cenotes on our next visit to Mexico.
Your home might be worth more than you think.
Find out how much at airbnb.ca slash host.
You're listening to TED Talks Daily where we bring you new ideas to spark your curiosity
every day.
I'm your host, Elise Hu.
Our speaker today says to better understand artificial intelligence, we actually have
to better understand intelligence itself, biological intelligence.
In his 2024 talk, Professor of Applied Physics Surya Ganguli compares how our human brains and DNA have evolved with the way AI has evolved
and thinks through the implications as AI continues to advance.
Okay, so what the heck happened in the field of AI in the last decade?
It's like a strange new type of intelligence appeared on our planet,
but it's not like human intelligence.
It has remarkable capabilities, but it also makes egregious errors that we never make.
And it doesn't yet do the deep logical reasoning that we can do.
It has a very mysterious surface of both capabilities and fragilities, and we understand almost nothing
about how it works.
I would like a deeper scientific understanding
of intelligence.
But to understand AI, it's useful to place it
in the historical context of biological intelligence.
The story of human intelligence might as well have started
with this little critter.
It's the last common ancestor of all vertebrates.
We are all descended from it.
It lived about 500 million years ago.
Then evolution went on to build the brain, which in turn, in the space of 500 years,
from Newton to Einstein, developed the deep math and physics required to understand the
universe from quarks to cosmology.
And it did this all without consulting ChatGPT. And then of course there's the advances
of the last decade. To really understand what just happened in AI we need to
combine physics, math, neuroscience, psychology, computer science and more to
develop a new science of intelligence. This science of intelligence can
simultaneously help us understand biological intelligence and create better artificial intelligence.
And we need the science now
because the engineering of intelligence
has vastly outstripped our ability to understand it.
I wanna take you on a tour of our work
in the science of intelligence
that addresses five critical areas
in which AI can improve.
Data efficiency,
energy efficiency, going beyond evolution, explainability, and melding minds and machines.
Let's address these critical gaps one by one. First, data efficiency. AI is vastly more
data hungry than humans. For example, we train our language models on the order of one trillion
words now. Well, how many words do we get? Just a hundred million. It would take us 24,000
years to read the rest of the one trillion words. Okay, now you might say that's unfair.
Sure, AI read for 24,000 human equivalent years, but humans got 500 million years of
vertebrate brain evolution.
But there's a catch.
Your entire legacy of evolution is given to you through your DNA, and
your DNA is only about 700 megabytes, or equivalently about 600 million words.
So the combined information we get from learning and
evolution is minuscule compared to what AI gets.
You are all incredibly efficient learning machines.
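To put those numbers side by side, here is a rough back-of-the-envelope comparison using only the round figures quoted above:

```python
# Rough comparison of the figures quoted in the talk; all values are the
# talk's round numbers, not precise measurements.
ai_training_words = 1_000_000_000_000   # ~1 trillion words used to train a language model
human_lifetime_words = 100_000_000      # ~100 million words a person gets
dna_megabytes = 700                     # ~700 MB carried in the genome

print(ai_training_words / human_lifetime_words)  # 10000.0 -> AI sees ~10,000x more text
print(dna_megabytes * 1_000_000)                 # 700000000 -> at most ~7e8 bytes from evolution
```

Even counting every byte of DNA as useful prior knowledge, the learning-plus-evolution budget stays several orders of magnitude below the AI training corpus.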
So how do we bridge the gap between AI and humans?
We started to tackle this problem
by revisiting the famous scaling laws.
These scaling laws have captured the imagination of industry
and motivated significant societal investments
in energy, compute, and data collection.
But there's a problem.
The exponents of these scaling laws are small.
So to reduce the error by a little bit, you might need to 10x your amount of training
data.
This is unsustainable in the long run.
And even if it leads to improvements in the short run, there must be a better way.
We developed a theory that explains why these scaling laws are so bad.
The basic idea is that large random data sets
are incredibly redundant.
If you already have billions of data points,
the next data point doesn't tell you much that's new.
But what if you could create a non-redundant data set
where each data point is chosen carefully
to tell you something new
compared to all the other data points?
We developed theory and algorithms to do just this.
We theoretically predicted and experimentally
verified that we could bend these bad power laws down to much better exponentials, where
adding a few carefully chosen data points can reduce your error, rather than having to 10x the amount of
data.
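As a toy sketch of the difference between the two regimes described here, with an exponent and decay scale invented purely for illustration (not the measured values from any scaling-law study):

```python
import math

# Toy illustration of slow power-law scaling versus fast exponential scaling.
# The exponent (0.1) and the exponential scale (n0) are arbitrary choices.

def power_law_error(n, alpha=0.1):
    return n ** (-alpha)          # slow power-law decay: error ~ n^(-alpha)

def exponential_error(n, n0=100_000.0):
    return math.exp(-n / n0)      # fast exponential decay: error ~ exp(-n / n0)

n = 1_000_000
# Power law: multiplying the data by 10 shrinks the error by only ~21%.
print(power_law_error(10 * n) / power_law_error(n))          # ~0.79
# Exponential: adding a modest, carefully chosen slice of data halves the error.
print(exponential_error(n + 70_000) / exponential_error(n))  # ~0.50
```

The point is only the shape of the curves: with a small power-law exponent, each order of magnitude of extra data buys little, while under exponential scaling a carefully chosen slice of data goes a long way.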
Let's zoom out a little bit, right, and think more generally about what it takes to make
AI less data hungry.
Imagine if we trained our kids the same way we pre-train our large language models, by
next word prediction.
So I'd give my kid a random chunk of the internet and say, by the way, this is the next word.
I'd give them another random chunk of the internet and say, yeah, this is the next word.
If that's all we did, it would take our kids 24,000 years to learn anything useful.
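As a concrete, if toy, picture of what "training by next word prediction" means, here is a minimal sketch using word-pair counts rather than a neural network; the tiny corpus is invented for illustration:

```python
from collections import Counter, defaultdict

# Minimal sketch of "learn by next-word prediction": count which word tends
# to follow which, then predict the most frequent successor. Real language
# models do this with neural networks over trillions of words; the idea of
# the training signal ("this is the next word") is the same.
corpus = "the cat sat on the mat the cat ate".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1   # training signal: "this is the next word"

def predict_next(word):
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # -> "cat"
print(predict_next("cat"))   # -> "sat" (first of the tied successors)
```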
But we do so much more than that.
For example, when I teach my son math,
I teach him the algorithm required to solve the problem.
Then he can immediately solve new problems
and generalize using far less training data
than any AI system would do.
I don't just throw millions of math problems at him.
All right, so to really make AI more data efficient,
we have to go far beyond our current training algorithms
and turn machine learning into a new science
of machine teaching.
And neuroscience, psychology, and math
can really help here.
Let's go on to the next big gap, energy efficiency.
Our brains are incredibly efficient.
We only consume 20 watts of power.
For reference, our old light bulbs were 100 watts. So we are all literally
dimmer than light bulbs.
But what about AI? Training a large model can consume as much as 10 million watts and there's talk of going nuclear
to power 1 billion watt data centers. So why is AI so much more energy hungry than brains?
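For scale, dividing the two wattage figures just quoted:

```python
brain_watts = 20                 # the brain's power budget mentioned above
training_watts = 10_000_000      # "as much as 10 million watts" to train a large model
print(training_watts / brain_watts)   # 500000.0 -> roughly 500,000 brains' worth of power
```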
Well the fault lies in the choice
of digital computation itself,
where we rely on fast and reliable bit flips
at every intermediate step of the computation.
Now the laws of thermodynamics demand
that every fast and reliable bit flip
must consume a lot of energy.
Biology took a very different route.
Biology computes the right answer just in time
using intermediate steps that are as slow and as unreliable as possible.
In essence, biology does not rev its engine any more than it needs to.
In addition, biology matches computation
to physics much better.
Right, consider for example addition.
Our computers add using really complex,
energy consuming transistor circuits.
But neurons just directly add their voltage inputs
because Maxwell's laws of electromagnetism
already know how to add voltage.
In essence, biology matches its computation to the native physics of the universe.
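A small sketch of that contrast: the digital version below steps through a ripple-carry adder, gate by gate, while the "analog" version is a single sum standing in for signals combining on a wire; both are simplifications for illustration:

```python
# Digital addition: a ripple-carry adder over bit lists, one full-adder step
# (several logic operations) per bit. This mimics how transistor circuits add.
def ripple_carry_add(a_bits, b_bits):
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):   # least-significant bit first
        total = a + b + carry
        result.append(total % 2)
        carry = total // 2
    result.append(carry)
    return result

# "Analog" addition: physics sums the inputs in one step, the way currents
# into a node (or, loosely, voltage inputs to a neuron) simply combine.
def analog_add(inputs):
    return sum(inputs)

print(ripple_carry_add([1, 0, 1], [1, 1, 0]))  # 5 + 3 -> [0, 0, 0, 1] (binary 8, LSB first)
print(analog_add([0.2, 0.5, 0.3]))             # -> 1.0
```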
So to really build more energy-efficient AI, we need to rethink our entire technology stack,
from electrons to algorithms, and better match computational dynamics to physical dynamics.
For example, what are the fundamental limits on the speed and accuracy of any given computation
given an energy budget?
And what kinds of electrochemical computers can achieve these fundamental limits?
We recently solved this problem for the computation of sensing, which is something that every neuron has to do.
We were able to find fundamental lower bounds or lower limits on the error as a function
of the energy budget, and we were able to find the chemical computers that achieve these
limits.
And remarkably, they looked a lot like G protein-coupled receptors, which every neuron uses to sense
external signals. So this suggests that biology can achieve amounts of efficiency
that are close to fundamental limits set by the laws of physics itself.
Popping up a level, neuroscience now gives us the ability
to measure not only neural activity, but also energy consumption,
across, for example, the entire brain of the fly.
The energy consumption is measured through ATP usage, which is the chemical fuel that powers all neurons.
So now let me ask you a question.
Let's say in a certain brain region, neural activity goes up.
Does the ATP go up or down?
A natural guess would be that the ATP goes down because neural activity costs energy,
so it's got to consume the fuel.
We found the exact opposite.
When neural activity goes up, ATP goes up, and it stays elevated just long enough to
power expected future neural activity.
This suggests that the brain follows a predictive energy allocation principle where it can predict
how much energy is needed where and when and
it delivers just the right amount of energy at just the right location for just the right
amount of time.
So clearly we have a lot to learn from physics, neuroscience and evolution about building
more energy efficient AI.
But we don't need to be limited by evolution.
We can go beyond evolution to co-opt the neural algorithms discovered by evolution, but implement
them in quantum hardware that evolution can never figure out.
For example, we can replace neurons with atoms.
The different firing states of neurons correspond to the different electronic states of atoms.
And we can replace synapses with photons.
Just as synapses allow two neurons to communicate,
photons allow two atoms to communicate
through photon emission and absorption.
So what can we build with this?
We can build a quantum associative memory
out of atoms and photons. This is the same
memory system that won John Hopfield his recent Nobel Prize in Physics, but this time it's a quantum
mechanical system built of atoms and photons and we can analyze its performance and show that the
quantum dynamics yields enhanced memory capacity, robustness, and recall. We can also build new
types of quantum optimizers built directly out of photons, and we can
analyze their energy landscape and explain how they solve optimization problems in fundamentally
new ways.
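The atom-and-photon version is beyond a short sketch, but the classical Hopfield associative memory it builds on fits in a few lines; the stored patterns below are arbitrary toy examples:

```python
import numpy as np

# Classical Hopfield associative memory, as a toy illustration of the model
# referenced above. Patterns are stored in a symmetric weight matrix via the
# Hebbian rule; recall iteratively pulls a corrupted pattern back toward the
# nearest stored memory. The quantum atom/photon implementation is not shown.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)          # Hebbian storage
np.fill_diagonal(W, 0)           # no self-connections

def recall(state, steps=5):
    state = state.copy()
    for _ in range(steps):
        state = np.sign(W @ state)   # update toward a stored memory
        state[state == 0] = 1
    return state

corrupted = np.array([1, -1, 1, -1, 1, 1])   # first pattern with its last entry flipped
print(recall(corrupted))                      # -> [ 1 -1  1 -1  1 -1], the stored pattern
```

Corrupt a stored pattern slightly and the update rule pulls it back; the claim in the talk is that the quantum atom-photon dynamics can do this recall with enhanced capacity, robustness, and recall.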
This marriage between neural algorithms and quantum hardware opens up an entirely new
field which I like to call quantum neuromorphic computing. Okay, but let's return to the brain where explainable AI can help us understand how
it works.
So now, AI allows us to build incredibly accurate but complicated models of the brain.
So where is this all going?
Are we simply replacing something we don't understand, the brain, with something else
we don't understand, our complex model of it?
As scientists, we'd like to have a conceptual understanding
of how the brain works,
not just have a model handed to us.
So basically, I'd like to give you an example
of our work on explainable AI applied to the retina.
So the retina is a multilayer circuit of photoreceptors
going to hidden neurons, going to output neurons. So how does it work? Well, we recently built the world's
most accurate model of the retina. It could reproduce two decades of experiments on the
retina. So this is fantastic. We have a digital twin of the retina, but how does the twin
work? Why is it designed the way it is? So where in your brain is a violation of Newton's first law first detected?
The answer is remarkable. It's in your retina.
There are neurons in your retina that will fire if and only if Newton's first law is violated.
So does our model do that? Yes, it does. It reproduces it.
But now there's a puzzle. How does the model
do it? Well we developed methods, explainable AI methods, to take any given
stimulus that causes a neuron to fire and we carve out the essential sub-circuit
responsible for that firing and we explain how it works. We were able to do
this not only for Newton's first law violations, but
for the two decades of experiments that our model reproduced. And so this one model reproduces
two decades' worth of neuroscience and also makes some new predictions. This opens up
a new pathway to accelerating neuroscience discovery using AI. Basically, build digital
twins of the brain and then use explainable AI to understand
how they work. We're actually engaged in a big effort at Stanford to build a digital
twin of the entire primate visual system and explain how it works. But we can go beyond
that and use our digital twins to meld minds and machines by allowing bidirectional communication between them.
So imagine a scenario where you have a brain,
you record from it, you build a digital twin,
then you use control theory to learn neural activity patterns
that you can write directly into the digital twin to control it.
Then you take those same neural activity patterns
and you write them into the brain to control the brain. In essence we can learn the language of
the brain and then speak directly back to it. So we recently carried out this
program in mice where we could use AI to read the mind of a mouse. Now we can go
beyond that. We can now write neural activity patterns into the mouse's brain so we can make it hallucinate
any particular percept we would like it to hallucinate.
And we got so good at this that we could make it reliably hallucinate a percept by controlling
only 20 neurons in the mouse's brain, by figuring out the right 20 neurons to control.
So essentially we can control what the mouse sees directly by writing to its brain.
The possibilities of bi-directional communication between brains and machines are limitless
to understand, to cure, and to augment the brain. So I hope you'll see that the pursuit
of a unified science of intelligence
that spans brains and machines
can both help us better understand biological intelligence
and help us create more efficient, explainable,
and powerful artificial intelligence.
But it's important that this pursuit be done
out in the open
so the science can be shared with the world.
And it must be done with a very long time horizon.
This makes academia the perfect place
to pursue a science of intelligence.
In academia, we're free from the tyranny
of quarterly earnings reports.
We're free from the censorship of corporate legal departments.
We can be far more interdisciplinary than any one company.
And our very mission is to share what we learn with the world.
For all these reasons, we're actually building a new center for the science of intelligence at Stanford.
While there have been incredible advances in industry on the engineering of intelligence, that work is
now increasingly happening behind closed doors.
I'm very excited about what the science of intelligence
can achieve out in the open.
You know, in the last century,
one of the greatest intellectual adventures
lay in humanity peering outwards into the universe
to understand it from quarks to cosmology.
I think one of the greatest intellectual adventures
of this century will lie in humanity peering inwards, both into ourselves and into the
AIs that we create, in order to develop a deeper, new scientific understanding of intelligence.
Thank you. That was Surya Ganguli at TED AI San Francisco in 2024.
If you're curious about TED's curation, find out more at TED.com slash curation guidelines.
And that's it for today's show.
TED Talks Daily is part of the TED Audio Collective.
This episode was produced and edited by our team, Martha Estefanos, Oliver Friedman, Brian Green, Lucy
Little, Alejandra Salazar, and Tonsika Sarmarnivon. It was mixed by Christopher Fazy-Bogan. Additional
support from Emma Taubner and Daniela Balorizo.
I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening.
When you choose Athabasca University's online MBA program, you'll get more. Experience more flexibility so you can pursue your degree while balancing work and family.
The AU Online MBA is designed to fit individual schedules so you can successfully complete
the program from home, work, or even while traveling.
You don't need to leave work for a month, a year, or even every second Friday.
Choose a more flexible MBA and get more out of your education.
Learn how at athabascau.ca slash flexible MBA.