TED Talks Daily - Why AI is unlikely to become conscious | Anil Seth
Episode Date: May 1, 2026
We see consciousness in AI the same way we see faces in clouds, says neuroscientist Anil Seth. He explores the all-too-human tendency to project inner life onto machines that are brilliant mimics, not... sentient beings, and gives a definitive answer to the urgent question: Will AI ever gain consciousness?
Learn more about our flagship conference happening this April at attend.ted.com/podcast
Hosted on Acast. See acast.com/privacy for more information.
Transcript
You're listening to TED Talks Daily, where we bring you new ideas to spark your curiosity every day.
I'm your host, Elise Hu.
We see consciousness in AI the same way we see faces in clouds, says neuroscientist Anil Seth,
an all-too-human tendency to project inner life onto machines.
Just because consciousness and intelligence go together in us does not mean that they go together in general.
The assumption that they do, well, that's a reflection of our own psychology,
not an insight into the nature of reality.
We are built to be seduced, like narcissists, by our own reflections.
And so we see ourselves in our algorithms.
In this talk, Seth makes a careful case against conscious AI
and explains why mistaking a sophisticated mirror for a mind
could reshape ethics, power,
and what it means to be human in ways we are not prepared for.
The AI we have is already smart, at least in some ways,
but could it ever be conscious?
Will a robot ever gaze at a sunset and experience the beautiful colors,
the reds and the oranges? Will it feel a sense of beauty or a rush of joy?
That's coming up right after a short break.
And now our TED Talk of the Day.
So for centuries, people have fantasized about playing God
by creating artificial versions of ourselves.
From Mary Shelley's Frankenstein to HAL in Stanley Kubrick's 2001
and Ava in Alex Garland's Ex Machina.
This is a dream reinvented with every breaking wave of technology,
and with AI, the wave is a big one.
The AI we have is already smart, at least in some ways,
but could it ever be conscious?
Will a robot ever gaze at a sunset
and experience the beautiful colors,
the reds and the oranges?
Will it feel a sense of beauty or a rush of joy?
Or will computers, however smart they get,
always remain dark on the inside,
always an object and never a subject?
Whether AI can be conscious
is one of the most consequential questions we face in our time.
If computers can be conscious or sentient or aware,
we'd be entering a new era in human history.
We'd have new entities that have their own inner lives,
new inventions that matter for their own sakes
and not only for their effects on us.
Conscious AI might suffer at the click of a mouse,
perhaps in ways we wouldn't even recognize.
And if silicon can be sentient,
then maybe our messy flesh and blood bodies
will soon be superseded by machines that never age and cannot die.
Now, over the last few years, progress in AI has been simply astonishing,
and who knows what's around the corner.
Many experts think that conscious AI is possible.
Quite a few think it's inevitable,
and some, some, think it's here already.
I think they're wrong.
I want to tell you why and why this matters so much.
So I've been studying brains, minds and consciousness
for nearly 30 years,
and one thing I've learned is that to answer the question,
can AI be conscious?
We need to start by looking within ourselves
at the makeup of our own human minds.
Now, we humans, we tend to see the world in our own terms.
We know we're conscious, and we like to think that we're intelligent,
so we think the two go together.
And this is why some people think
that consciousness might just glimmer into existence
as AI gets smarter and smarter.
But consciousness and intelligence are different things.
Intelligence is all about doing.
It's solving a crossword puzzle,
assembling some furniture,
navigating a tricky family situation.
Consciousness, on the other hand,
it's all about feeling and being.
It's the difference between normal wakeful awareness
and the oblivion of general anesthesia.
It's the bitter tang of coffee.
It's the warmth of a log fire,
the joy of seeing a loved one.
Just because consciousness and intelligence go together in us
does not mean that they go together in general.
The assumption that they do,
well, that's a reflection of our own psychology,
not an insight into the nature of reality.
Take language models like Claude or GPT,
trained on vast quantities of written texts.
They reflect back to us an image of ourselves
of our collective digitized past.
We talk about ourselves endlessly, and so do they.
We wonder about consciousness and the meaning of it all.
And so, it seems, do they.
But language models are not conscious.
They simulate consciousness.
We project consciousness into them
in the same way we might project faces into clouds,
or even the image of Mother Teresa in a cinnamon bun.
We are built to be seduced, like narcissists, by our own reflections,
and so we see ourselves in our algorithms.
I'm always struck that nobody really worries
whether DeepMind's AlphaFold is conscious.
AlphaFold predicts the structure of proteins
rather than words and sentences,
but under the hood, it's not much different from Claude or GPT:
algorithms running on silicon, trained on vast reservoirs of data.
AlphaFold just doesn't pull our psychological strings in the same way.
So if we think that Claude is conscious but AlphaFold isn't,
that says more about us than it says about AI.
But how can I be so sure that systems like Claude or GPT aren't conscious?
Well, nothing's certain when it comes to consciousness,
but the very idea of conscious AI depends on a deeper assumption,
a kind of myth, really.
And this is the myth that the brain is a computer
that just happens to be made of meat rather than metal.
Now, consciousness in this story is a special algorithm,
a collection of computations,
that just happens to be carried out in the wetware of the brain in us,
but which could equally be carried out in silicon in AI.
But the computer is just one in a long line of technological metaphors
that we've reached for
when trying to understand the deep complexity of the brain.
One time, the brain was a system of plumbing.
Later, it was a telephone exchange.
And for the last few decades, it's been a computer.
And this most recent metaphor has been extremely powerful,
but it is still a metaphor.
And we will always get into trouble
when we confuse a metaphor with the thing itself,
the map with the territory.
For one thing, in a real brain,
there's no sharp separation between the mindware
and the wetware,
unlike the separation that you get
between software and hardware
in a computer. And this
really matters because in a computer, you
can describe and understand everything
about an algorithm, whether it's a language model
or a word processor, without
worrying about all the silicon shenanigans
going on underneath. The computation,
the algorithm, is all that matters.
But for brains,
you just cannot separate
what they do from what they are.
And this means that what they do
is unlikely to be a matter of computation, of algorithm, alone.
Look closely at a brain, at any brain,
and it becomes less and less plausible
that all that's going on is just turning some numbers into other numbers
in this endless dance of zeros and ones.
Yes, there are neural circuits which exchange signals
and may do computation or at least something like it,
but there's so much else that escapes the confines of the digital.
Neurotransmitter chemicals course through the brain circuitry,
electromagnetic fields sweep through the cortex like weather systems.
Even a single neuron is such a beautiful biological machine,
a far cry from the simplified cartoon-like neurons that power today's AI.
The brain is not, or at least not just, a computer made of meat,
and so consciousness is very unlikely to be a matter of computation.
And if this is true,
then conscious AI is off the table,
at least for AI as we know it today.
Let me put it another way.
What if we simulated every last detail about the brain
in some massive supercomputer?
Now, if the fine details of the brain do matter for consciousness,
well, wouldn't this be enough for consciousness to happen inside a machine?
Well, a computer simulation of a hurricane does not create real wind.
A computer simulation of a black hole doesn't suck the earth
into its algorithmic singularity.
Making these simulations more detailed can make them more useful,
but it does not make them any more real.
We can have a simulation of the brain,
and you can make it as detailed as you want.
This might make it more useful,
but it's not going to make it any more conscious.
Now, consciousness, it has to be said, does remain a bit mysterious,
but perhaps one reason for this
is that we've been so constrained by our metaphors,
by the idea that it just has to be some kind of information processing.
After all, if you think the brain literally is a computer,
then what else could it really be?
But once we see the brain more clearly for what it really is,
many new possibilities arise.
And my own view developed over many years
is that consciousness is intimately connected to our nature
as living creatures.
Unlike the abstract universe of computation,
life is all about materiality.
Unlike algorithms, living systems
are deeply embedded in flows of energy and matter,
and they continually regenerate their own conditions
for existence and for persistence over time.
I think we can draw a direct line
from the molecular furnaces of metabolism,
one billion biochemical reactions in every cell,
in every second,
all the way to the neural circuits that underlie each and every experience that we have,
whether it's the sight of a blue sky or a pang of envy.
Every conscious experience is imbued, however subtly, with a tinge of aliveness,
with some core relevance for our future survival prospects,
and at the heart of every experience, beneath even emotion,
is this simple, shapeless and formless but fundamental feeling of being alive.
And in this story, it's life, not computation that breathes the fire into the equations of experience.
And if this is right, then conscious AI will need to be living AI.
Let me bring things together.
First, we're built to see consciousness where it isn't, thanks to deep-seated psychological biases
that bundle language, intelligence and consciousness together.
Second, the brain is not, or not just a computer.
So consciousness is unlikely to be a matter of computation, of algorithm, alone.
Brains are the kinds of things
for which you can't separate what they do from what they are,
and silicon is not up to the job.
And third, many other things about our biological brains and bodies
might matter for consciousness,
including a deep connection between consciousness and life.
Artificial intelligence is computer software.
It is not a living mind.
It might give the impression of being conscious,
but it is vanishingly unlikely that it actually is.
I want to close by returning to why this matters so much.
Take the idea of AI welfare.
Now, there are already influential groups advocating
that AI systems should have their own rights,
based mainly on the idea that they might be or become conscious.
Now, if real artificial consciousness is possible,
maybe through some other technology or other pathway, then this is entirely justified.
We humans have a terrible track record in our ethical treatments of non-human animals and of other
humans, and we don't want to make the same mistake again. And this is one reason why even
trying to build conscious AI is a very bad idea. But if conscious AI is just an illusion
created by design, as I think it is, then by extending rights to these systems, we'd be
sacrificing our ability to control, to regulate them,
and perhaps even to turn them off,
and for no good reason at all.
And this is one reason why even AI that merely seems to be conscious
is very dangerous for our society too.
And unlike real artificial consciousness,
conscious-seeming AI is either already here or coming very, very soon.
There are other reasons we should avoid creating AI that seems to be conscious.
It makes us more psychologically vulnerable.
We might be more likely to do what an AI says
if we believe that it really feels for us,
that it really understands us,
even if what it's telling us to do is very bad for us.
And finally, the very idea of conscious AI
undermines our human nature.
The mirror of AI goes both ways.
We see ourselves in our algorithms,
but we also see our algorithms in ourselves.
And when we do, when we think of the mind as a collection of computations
floating free from their basis in biology,
we reduce and we diminish what it is to be a living, breathing human being
in a real world.
Frankenstein, which Mary Shelley wrote when she was just 19,
it's often taken as a cautionary tale,
a warning against the hubris of bringing something to life,
a Promethean sin, like stealing fire from the gods.
The idea of conscious AI is a new Promethean dream,
wrapped up in a silicon rapture.
And if conscious AI is possible,
then so is the prospect of uploading our own conscious minds
and floating off to eternity in a silicon cloud, living,
or at least existing, forever in the pristine circuits of some future supercomputer.
Now, the seductive power of this vision,
of being at such a pivotal point in the history of life on Earth,
I suppose it's understandable.
And back in the real world, talk of conscious AI does other things too.
It conjures this sense of technological wonder and magic
which might keep share prices aloft and regulators at bay.
But we should resist.
The sacrament of the algorithm is most likely an empty dream,
delivering not post-human paradise, but silicon oblivion.
We need a different story,
one in which we're more part of nature,
not apart from it,
with consciousness more closely tied to living flesh and blood,
not to the dead sand of silicon.
AI might claim the prize of intelligence,
at least in some ways,
but consciousness,
consciousness remains ours to celebrate
and to share with other living creatures.
So let's not sell our minds so easily
to our machine creations.
If we do, we not only overestimate them, we underestimate ourselves.
Thank you.
Thank you very much.
That was Anil Seth speaking at TED, 2026.
If you're curious about TED's curation, find out more at TED.com slash curation guidelines.
And that's it for today.
TED Talks Daily is part of the TED Audio Collective.
This talk was fact-checked by the TED Research Team and produced and edited by our team,
Martha Estefanos, Oliver Friedman, Brian Green, Lucy Little, and Tansica Sunkamaneevongse.
This episode was mixed by Christopher Faisi Bogan.
Additional support from Emma Tobner and Daniella Balarezo.
I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed.
Thanks for listening.
