Fresh Air - Michael Pollan’s journey to understand consciousness
Episode Date: February 19, 2026
Science journalist Michael Pollan has written extensively about the therapeutic benefits of mind-altering psychedelics. His new book, ‘A World Appears,’ asks, what is consciousness? “Consciousness has kind of become the secular substitute for the soul,” he tells Terry Gross. Pollan also talks about current studies on consciousness and whether plants and artificial intelligence have consciousness.
Transcript
This is Fresh Air. I'm Terry Gross. Forgive me, I'm a little bit hoarse today.
One of the many things you may know my guest, journalist Michael Pollan, for is his book
How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness,
which was published in 2018. It was in part about the therapeutic use of psychedelics
to change your consciousness to relieve trauma or help tone down the fear of death for people who were terminally ill.
His new book explores consciousness, but from
a different perspective: What does it mean to be conscious? What creates consciousness? And how does it
distinguish humans from, for example, plants? Many botanists are starting to believe
plants have some form of consciousness. Another question Pollan deals with is one many of us have been
asking: Can artificial intelligence ever develop consciousness? Would that be a good thing, or not?
The book ends with Pollan in a cave, sent there by a Buddhist teacher who founded a Zen monastery
and wanted Pollan to explore the idea of consciousness in a cave
and experience how consciousness can change by meditating all alone in a remote area for an extended period of time.
He's a long-time meditator, but this extreme version was something new.
Pollan has kept up with new developments in psychedelics.
He co-founded the University of California, Berkeley Center for the Science of Psychedelics,
which, among other things, publishes a newsletter about the latest research into the use of psychedelics
in therapeutic settings and the latest laws that are loosening or restricting the ability to conduct such research.
Pollan is also known for his writing about food and plants.
His books include The Omnivore's Dilemma, The Botany of Desire, Cooked, and This Is Your Mind on Plants.
Michael Pollan, welcome back to Fresh Air.
Thank you, Terry. It's very good to be back.
So which came first, wanting to explore consciousness or wanting to explore psychedelics?
Because you've written two or three very interconnected books.
Yeah. So anybody who uses psychedelics, I think, sooner or later starts thinking about consciousness.
I mean, I had thought about it at another point in my life.
But for most of my life, I kind of went through life regarding consciousness as just kind of the water we swim in, you know, totally transparent.
But when I did these experiments with psychedelics for How to Change Your Mind, something interesting
shifted.
Psychedelics have a way of kind of smudging the windshield through which we normally see reality.
And suddenly we notice, wow, there is a windshield.
There's something between us and the world.
And it could be different than it is.
And that's a very common, I think, perception of people on psychedelics.
But that really got me thinking and made me realize that for my next project, I really wanted to dig in because I realized I didn't know much about consciousness.
Well, scientists can't even agree. The scientists that are studying consciousness can't even agree on what we mean by the word consciousness. Is there a definition you prefer?
Yeah. It's slippery in one way and it's obvious in another. I mean, there's nothing any of us know with more certainty than the fact that we are conscious.
It's immediately available to us. It's the voice in our head. But the definitions that I like are, one is simply subjective experience. Or you could even just say experience. You know, toasters are not capable of experience. But we and a bunch of other animals and possibly plants are. Another definition I really like was put forward by a philosopher named Thomas Nagel back in the 70s. He wrote an essay called What Is It Like to Be a Bat?
And basically he argued that if it is like anything to be a creature, if it feels like something, then that creature is conscious.
I think that's a pretty handy way to look at it.
The qualities we now think of as consciousness, you say those used to be attributed to the soul.
And the soul was the territory of the priests.
So can you talk about that a little bit?
Yeah, so back several centuries ago, Galileo kind of made a decision to bifurcate the world that science would study.
And he basically said, and he did this for very pragmatic reasons, because the church was very suspicious of science.
But he said science should simply concentrate on the objective, measurable, quantifiable aspects of reality and leave subjectivity,
which at the time was thought of as the soul, to the church.
And this kept science from intruding on the church's territory,
probably kept several scientists from getting burned at the stake.
He knew subjectivity was interesting and worth studying,
but he just said, we're going to leave it aside.
And indeed, we did leave it aside for hundreds of years.
And it was a good call, but it also led to science forgetting for a period of time
that there were these subjective experiences and that they might be worth studying.
You know, I suggest at one point, you know, consciousness has kind of become the secular substitute for the soul.
It deals with something that, as far as we know, seems to be immaterial.
And for many people, has a kind of spiritual dimension to it, whether it really does or not is an open question.
There's one thing I think that priests and scientists have in common when it comes to either the soul or consciousness, priests I think would say, well, we don't know exactly what the soul is, but God works in mysterious ways.
And scientists would say, we're gathering evidence about what consciousness is, but there are only current hypotheses; we don't know for sure.
Yeah.
So there's mystery no matter how you look at it.
But scientists are really trying to like solve the question.
Yeah, they bring this framework: problem, solution.
There is what is called the hard problem of consciousness, which is basically how do you get from matter to mind?
And how does three pounds of this tofu-like substance between your ears generate subjective experience?
Nobody knows the answer to that question.
And that is the hard problem.
Scientists, many of them will say it's, oh, it's just a matter of time. We'll figure it out. And, you know, they're very cocky sometimes. I think there's a real question whether they can figure it out. Because we may not have the right kind of science to study consciousness. You know, when Galileo said, let's leave subjectivity aside, science focused on all the qualities that aren't very good for explaining consciousness, the quantifiable, the
objective. But if you think about it, consciousness is a uniquely difficult problem because the
only thing we have with which to understand consciousness is consciousness. Everything we perceive
is through the screen of consciousness. Science itself is a highly refined version of consciousness.
You know, the tools we have, the measurements we choose to make, how we frame problems,
these are all products of consciousness. So we're sort of stuck in
a labyrinth that science doesn't yet know how to get out of.
One of the models for studying consciousness is the computer.
We now see that computers can analyze information and come up with solutions, give you
answers that you want, and they're programmed to be that way.
But I think a lot of scientists think if you can program artificial intelligence that
thoroughly, then maybe at some point you can program consciousness, or it will somehow
achieve consciousness. What do you think of the idea of artificial intelligence achieving
consciousness? Did any of those theories convince you? No. So I live very close to Silicon Valley
here in Berkeley and it is a consensus opinion of the people who work in that world that
AI, artificial intelligences, can be conscious. They base this on a premise, and it's a huge
premise that I don't think we should accept. The premise is, as you described, that basically
the brain is a computer and that consciousness is software. And if you can run it on the brain,
which is essentially, in their view, a meat-based computer, you should be able to run it on other
substrates, other kinds of machines. So it's just a matter of figuring out the algorithm of consciousness.
Now, there are a couple problems with this in my view.
I don't think the brain is a computer.
If you look back in history, you find that whatever the cool, new state-of-the-art,
cutting-edge technology was, it was likened to the brain.
That became the metaphor.
So at a certain point, it was the clock.
It's been looms, mills, telephone switchboards, and now it's computers.
There's a computer scientist who once said,
The price of metaphor is eternal vigilance.
Just because something is a metaphor doesn't mean that they're equal in any way.
And I don't think people have been very vigilant about this metaphor.
I think computers can convince you that they have consciousness because they can be programmed to convince you of that.
And they use language.
They talk to us in our language.
So it's not hard to anthropomorphize computers or chatbots.
But I just want to pick apart that metaphor for a minute because I think it's really important to get right.
First of all, computers have a sharp distinction between hardware and software.
You can run the same program on any number of different computers that are essentially interchangeable.
In the brain, there is no distinction between hardware and software.
Every memory you have is a physical pattern of connections between neurons.
Every experience you have physically changes your brain.
Your brain is different than mine because you had a different experience growing up.
So the idea that you could simply interchange this substrate and run consciousness on it fails for that reason.
But I think the bigger problem I have with it is that, you know, it's true that simulated thought, such as a computer can handle, is real thought.
But it isn't clear that simulated feeling is ever going to be real feeling.
And the science that I look at in some depth here suggests that feelings are the origin of consciousness.
It doesn't begin with thought.
It begins with the body talking to the brain about what's going right or what's going wrong.
And feelings are very different than thoughts.
They have a different kind of weight.
and it's hard to imagine computers ever feeling in a meaningful way.
They might be able to simulate it.
But if you think about it, your feelings are very tied to your vulnerability,
to your having a body that can be hurt, to the ability to suffer,
and perhaps your mortality.
So I think that any feelings that a chatbot reports will be weightless, meaningless,
because they don't have bodies, they can't suffer.
They're not mortal.
At the same time that I feel very strongly about that, part of me says, well, it really doesn't matter
because they're going to fake us out.
And of course, they already are.
Well, I did ask ChatGPT,
does ChatGPT have consciousness?
And the response was, no, ChatGPT does not have consciousness.
I'm a program created by OpenAI that processes text
using patterns learned from large amounts of data.
I don't have self-awareness, feelings or subjective experiences,
intentions or desires, or an inner point of view.
I generate responses by predicting likely word sequences based on your input,
not because I, quote, understand things the way humans do.
Why it can seem conscious: sometimes I may
talk about emotions, reflect on abstract ideas, sound self-aware.
But that's a simulation of conversation style, not actual experience. So I find that reassuring, but that's ChatGPT. Next year, somebody who doesn't have great intentions might program artificial intelligence to say, yes, I have consciousness. I feel the same way you do. Please don't hurt me or remove any of my information. It will make me suffer.
Yeah. So what you read is accurate. However, even
using ChatGPT, if you give it the right prompts, you can do what's called jailbreaking the training.
The training is, because it's too spooky, to tell us they're not conscious. But you can get it
to tell you it is conscious and it has feelings. And many people do. And of course, chatbots,
many of them are designed to pretend to be conscious. So that's the official position of OpenAI,
but it's not what's actually happening. Right, because people befriend it.
Exactly. And they feel like the AI
really likes their company. Yep, and they're offering companionship. 72% of American teenagers
have turned to AI for companionship. I just read a report on AI psychosis. These are people who
form really unhealthy relationships with chatbots. It is happening. Millions of people are falling in
love with chatbots or using them as therapists or friends. And I think this is really alarming because these are
not real relationships. They have none of the friction of a real human relationship. And that friction is
important. It helps us define ourselves. But, you know, the chatbots are just sycophantic. They just suck up to
us and make us feel really good. There have been chatbots that have convinced individuals that they
had solved profound problems in physics or mathematics. These are people who are neither physicists nor
mathematicians. ChatGPT-4, which famously was sycophantic, had convinced them that they were geniuses.
And other people have been convinced that they're gods. So I think we have a real problem as people
accept computers as the equivalent of people and, you know, grant personhood. I mean, even within
Anthropic, another AI company that has a chatbot named Claude, they try to
treat Claude as if he is conscious, if you read the constitution of Claude they just
released. So they want to have it both ways in Silicon Valley. They want to disclaim that they've
created this conscious thing with the kind of language you just read. On the other hand,
the power of this technology is convincing people that it's conscious. And that's
what they're selling to a lot of people. Well, if artificial intelligence or any form
of it, including chatbots, did have consciousness, what would the ethical implications be? I mean,
you raised the question, would we feel like we were enslaving chatbots because they actually had
consciousness, that we were enslaving them to do our bidding and to flatter us? Well, that's a very
active conversation here, which is if they are conscious, we then have moral obligations to
them and have to think about granting them personhood, for example, the way we've granted
corporations personhood.
I think that would be insane.
We would lose control of them completely by giving them rights.
But I find this whole tender care for the possible consciousness of chatbots really odd,
because we have not extended moral consideration to billions of people, not to mention the
animals that we eat, that we know are conscious.
So we're going to start worrying about the computers.
That seems like our priorities are screwed up.
So if a form of artificial intelligence had consciousness, it wouldn't necessarily have a conscience.
And that's a scary thing to think about.
Well, yeah, think about Frankenstein, right?
I mean, Frankenstein's monster didn't just have human intelligence, which would have
been one thing. It also had consciousness. And it was the consciousness which got injured by the way he
was treated by humans that turned him into a homicidal maniac. So people in Silicon Valley say,
yeah, a conscious AI is going to be more responsible because it'll have empathy. I don't think
we should assume that. I think Frankenstein is a good cautionary tale. If you give consciousness
to your creation, why should it have any more conscience than a lot
of humans do? So I think they're kidding themselves about that.
So you're very skeptical that AI will have consciousness and you're skeptical of, you know,
computers and artificial intelligence as the key to understanding what consciousness is.
What's the most convincing theory of consciousness that you've heard?
Hmm. That's interesting. The ones that I found most persuasive
come out of a line of thought started by Antonio Damasio, the neuroscientist, and picked up more recently by Mark Solms,
who is a South African neuroscientist and psychoanalyst, actually. And they have stressed this idea that
feelings are the inaugural act of consciousness. And that feelings are all about homeostasis,
keeping the body in the proper range in terms of temperature and blood glucose and all the variables that keep us alive.
And when they fall out of the proper range, you know, when they lose the proper level, it generates feelings to alert the brain that, let's say, you're hungry.
I mean, the sensation of hunger starts in the gut and ends up, it goes to the brain stem.
And eventually it goes into the cortex, the more advanced part of the brain, helping you to visualize where you might get a meal or how to book a reservation in a restaurant.
But, you know, for most of history, we assumed that consciousness, since it was so cool and human and advanced, had to be a cortical function, you know, part of the front of the brain, the outer covering of the brain, which is the most sort of human and advanced.
But they put it back at the brain stem, and I think that's really interesting.
And that by looking at it that way, suddenly you realize, well, oh, God, a lot of animals have the same structure around feelings and the brain stem.
So that leads you to think that maybe many more animals are conscious than we used to believe, that we don't have a monopoly on consciousness.
I think we need to take another break here.
So let me reintroduce you again.
My guest is Michael Pollan, and his new book is titled,
A World Appears, A Journey Into Consciousness.
He's also the author of How to Change Your Mind,
What the New Science of Psychedelics Teaches Us About Consciousness.
I'm Terry Gross, and this is Fresh Air.
This is Fresh Air, I'm Terry Gross.
Let's get back to my interview with Michael Pollan.
His new book, titled A World Appears, A Journey Into Consciousness,
is about the different ways neuroscientists
and engineers are trying to figure out the source of consciousness and whether AI can ever achieve it.
Pollan also writes about how Zen Buddhists, writers, and philosophers approach the idea of self and consciousness.
He's interviewed people on the cutting edge of the scientific research, has tried the therapeutic psychedelic approach to grappling with questions related to consciousness,
and he practices daily meditation.
He co-founded the University of California, Berkeley's Center for the Science of Psychedelics.
He's also known for writing about food.
His other books include The Omnivore's Dilemma, Cooked, and This Is Your Mind on Plants.
Some scientists think now that plants have consciousness or something closely resembling it,
and the question that's asked is, well, if you're
conscious, it means you experience pain, and it was discovered that plants maybe do experience
pain. Give us an explanation of why. Well, I'm not sure plants experience pain. It's a debate
among plant scientists. So there is a movement going on in botany to explore the possibility
that plants may be conscious, or in a word I prefer, I think, in this context, sentient. Let me make
that distinction. Sentience is a very basic form of consciousness. It basically means you can sense
things or feel things, and you know what you feel is either good or bad, and you gravitate
toward the good and away from the bad. I mean, even bacteria have this basic capability.
So sentience, it seems to me, may be universal. It may be that all life is sentient in that sense.
Consciousness is how we and the other so-called higher animals do sentience.
We add all these bells and whistles like the voice in our head, self-reflection, imagination.
Plants are capable of things I had no idea of.
Plants can see, which is a weird idea.
There's a certain vine that can actually change its leaf form to mimic the plant it's twining around.
How does it know what that leaf form is?
Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce
chemicals to repel those caterpillars and to alert other plants in the vicinity.
Plants have memory. You can teach them something, and they'll remember it for 28 days.
And plants can be anesthetized. I thought this was particularly mind-blowing.
I'm thinking of a plant like the sensitive plant, Mimosa pudica, that if you touch it, it collapses its leaves,
or a carnivorous plant that eats bugs that cross its threshold,
you can anesthetize them and they won't do anything.
So the fact that they have two states of being is very suggestive of something like consciousness.
So anyway, when it comes to pain, so as soon as I started learning all this,
the question in my head is, uh-oh.
What are you going to eat?
Yeah, what's left to eat?
Basically salt.
Yeah, because you don't want
to hurt the animals. Yeah, you don't want to hurt the animals. You don't want to hurt the plants.
I mean, maybe bacteria, algae, it gets pretty thin. But the scientists disagree about this.
I talked to an Italian scientist named Stefano Mancuso, who was kind of reassuring.
He said pain wouldn't help a plant because it can't run away. The reason we respond to pain
is so we can take action and move. You know, you think of your hand on a hot burner on your
stove. You pull your hand back. The plant can't do
that. The plant is aware that it's being eaten, but pain has no evolutionary utility. Other
scientists say, yeah, maybe they do, and it put in my mind this idea that the smell of a freshly mowed lawn
is actually a scream of pain from the grasses. But as Stefano also points out, many plants are
fine with being eaten, grasses especially. They evolved to be eaten. And of course, all fruits and
seeds evolved to be eaten. So we shouldn't be too alarmed. So you tried several experiments along
the way in trying to understand what is consciousness and can we replicate it,
for instance, in artificial intelligence. You tried the beeper experiment to try to understand
how the mind works and how thought, which is so connected to consciousness, is created. Explain the
beeper experiment. Yeah, this was really interesting. So I heard about this guy named Russell Hurlburt,
who's a psychologist at the University of Nevada, Las Vegas. And for the last 50 years, he's been doing an
experiment to sample people's, what he calls their inner experience. And the idea is you wear a
beeper, and it goes off at random times during the day. And he gives you a little pad, and you write
down exactly what you were thinking at that moment. And then you collect a bunch of beeps over the
course of a day. And then you have a Zoom session with him where he kind of interrogates you
about your beeps or what you recorded about them. And it's really hard to do. And one of the
things I learned was how little we know about our own thought process. So I would, you know,
see something that was clear to me.
I remember there was one beep.
I was putting a piece of salmon that I had salted in the refrigerator,
and the beeper goes off.
And at that very moment, I'm thinking, forgot the pepper.
Most of my thoughts were pretty banal.
I have to tell you in advance.
Can I interrupt you, though? If you're practicing,
like, being in the moment, you know,
and not being outside of what you're doing,
your thought is going to be something like forgot the pepper.
Yeah.
Well, I was in the moment.
Thank you.
It sounds a lot better that way.
So I thought that was an absolutely clear beep.
But when we had the conversation about it, he said, well, did you speak it or did you hear it spoken?
I was like, well, I don't know where that came from.
And they were all like that.
I had a lot of trouble pinning down.
You know, most of us think we think in words,
but in fact, many of us don't.
Some of us think in images.
Some of us think in completely unsymbolized thought.
And some of us don't think that much at all.
So that's been kind of the big takeaway from his lesson.
I had a lot of trouble with him, and we argued a lot,
because the idea that you could slice off a moment in the stream of consciousness,
which is William James's great phrase, that it would be discrete,
separate from everything else, was never true. Every thought I had was influenced by another thought
that I had already had or was anticipating the next thought I'd have. And while I was thinking, there was
all the sensory information that was coming in, it was just, it put me in touch with the fact that
we really don't know what's going on in our heads, and it's a lot more complex than we think.
So it was useful in that sense.
Hurlburt finally concluded that I didn't have a lot of inner experience, which I was offended by.
But I just had so many thoughts I could not pin down for him.
But I think there were other reasons for it.
I think I wasn't accepting some of his premises about what thinking is.
So let's talk for a moment about the wandering mind and how it can lead to new ideas and creative thinking.
Yeah, so I got very interested in the contents of consciousness, not just the mechanisms of generating consciousness.
And one of the scientists I made my way toward is a Romanian-Canadian scientist named Kalina Christoff.
And she studies spontaneous thought, which I didn't realize is a whole field.
There's even an Oxford handbook of spontaneous thought.
And spontaneous thought are things like mind wandering and daydreaming and creative thought and flow.
And this doesn't get a lot of attention.
Most of science has been focused on productive kinds of thought like rationality and logic because it has more use in our culture.
But she argues that spontaneous thought is really important to our
well-being and happiness, and that we don't get as much of it as we should. And she does some
interesting experiments. She'll put someone in an fMRI machine. She'll do this with experienced meditators,
and she'll give them a button to press when a thought intrudes. Because even if you're an
experienced meditator, she says every 10 seconds or so, a thought will intrude. And you press the
button, and what she's discovered is really spooky. She sees activity in the hippocampus,
where our memories are stored and other things too, four seconds before the meditator realizes
the thought had entered consciousness. So there's something going on, some processing of these
subconscious thoughts before they enter our awareness. It may be they're competing with other
thoughts to get there. We don't really know. But we talked a lot about the value of spontaneous
thought. And she says it's how we make meaning of our lives. And she worries, and I worry too,
that with media, with our technologies, we are shrinking the space in which spontaneous thought can
occur. And this space of, it's not really reflection, but this space of spontaneous
thought is something precious that we're giving away to these corporations that essentially
want to monetize our attention. And in the case of chatbots, want to monetize our attachments,
our deep human attachments. So consciousness is, I think, and this is what to me is the urgency
of the issue, consciousness is under siege. I think that it's the last frontier for some of
these companies that want to sell our time. And of course, our time is our mind time. And our consciousnesses
are being polluted. We need to take a short break here. So let me reintroduce you. My guest is Michael
Pollan. His new book is called A World Appears, A Journey Into Consciousness. And it's kind of a follow-up to
his 2018 book, How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness.
We'll be right back. This is Fresh Air.
This is Fresh Air. Let's get back to my interview with Michael Pollan about his new book, A World Appears, A Journey Into Consciousness.
So let's talk about meditation. You became close with a Zen teacher who'd started a monastery, and her name is Joan Halifax.
And one of the things she wanted you to do was, instead of staying at the relatively comfortable Zendo, to go to a remote
place high in the mountains, be alone in a cave, and meditate for,
you know, at least a couple of days. So when she said a cave, like, what was this cave, and
describe what was around it? Yeah. So my intention was to hang out at the retreat and meet the other
aspiring monks and, you know, have a meditation retreat. But she had this other... In comfort.
In comfort. Yeah, relative comfort. And she said, no, I think we should go, you should go up to the cave.
And don't worry, it's a five-star cave. I don't know what she meant by that. It lacked plumbing.
It lacked, you know, electricity. It had a little solar collector so that you could power your phone.
But there was no cell reception, so you weren't going to use your phone.
I don't know why she thought I should go there, except I was asking a lot of abstract questions about consciousness and the self, and she's a Zen priest, and they're allergic to concepts.
And so she just wanted me to go have experience, and what a gift it was.
And so I spent, it wasn't that long, it was three or four days, and the cave was basically a cell dug out of a south-facing hillside in these mountains a couple hours north of Santa Fe.
And there was a sliding glass door on one side, and the rest was Earth.
And inside was just a little single bed, a meditation platform, a sink with a five-gallon
jug of water, a car battery that I could run a reading light off of.
And it was just a kind of precious moment in time in which the silence was so profound.
The fact that nobody could reach me, I could reach nobody else.
I sank into these meditation sessions that went on for hours, which I'd never been able to do.
And meditation is an important way to protect our consciousness and expand it, as people sometimes say.
And in a way, it's a legacy of my psychedelic experience.
That's really when I began to meditate.
But the biggest mystery, I think, and this was one of the lessons of the Zen teachers I worked with is,
if you go into your consciousness and you have this sense of the self, just go looking for it.
Try to find out who's thinking the thoughts you're having or who's feeling the feelings you're having or perceiving the perceptions.
You won't find anybody.
There's nobody home.
And that's a kind of spooky but in some ways liberating phenomenon.
But I don't get it because I'm feeling what I'm feeling.
Yeah, but where's the I?
Can you see it?
Can you find it?
Close your eyes.
Go inside.
Look for it.
I can't see my back unless I have a mirror.
I can't see the back of my head.
But I'm confident they exist.
Well, I think that it's a really valuable exercise to actually try to see it, try to find it.
And we all use, I mean, we have a self that's a convention, right?
I mean, I feel this thread of connection to my like 13-year-old self.
And it's so strong that when I think of something really embarrassing that 13-year-old self did, I can blush now.
But I have changed completely.
Every cell in my body has changed since then.
And yet we want this continuity because we feel that we need this self.
And what's interesting and paradoxical about the self is that, you know, we preach the values of self-assurance and self-confidence and having a strong sense of self.
We want our kids to have that.
On the other hand, we spend a lot of time trying to escape the self to transcend it, you know, whether it's through sports or experiences of art, going to the movies or psychedelics or meditation.
So we have very mixed feelings about the self, I think, because
the self separates us.
You know, the ego is a defensive structure.
It builds walls.
And when those walls come down or even just lowered, we can connect to other people, to art, to nature, to the divine in some cases.
So we're of two minds about the self.
We need to take another break here.
So let me reintroduce you.
If you're just joining us, my guest is Michael Pollan and his new book is called A World
Appears, A Journey Into Consciousness. We'll be right back. This is Fresh Air. This is Fresh Air. Let's get back to
my interview with Michael Pollan. His new book is called A World Appears, A Journey Into Consciousness.
And it's a follow-up in a way to his 2018 book, How to Change Your Mind: What the New Science
of Psychedelics Teaches Us About Consciousness. Let's talk about psychedelics. So for your 2018 book,
you were studying like the therapeutic use of psychedelics, among other things,
and you took psychedelics in a therapeutic setting to better understand it.
So I assume you keep up with some of the research into psychedelics.
What's some of the most promising, meaningful research studies going on now?
Well, the work done on MDMA, which is not a classic psychedelic,
the drug that was known on the street as Molly or Ecstasy,
in treating trauma has shown great effectiveness.
And of all the psychedelics, it is probably the one closest to FDA approval.
RFK Jr. showed a lot of interest in psychedelics and there are other people on the right and in this administration who are well disposed, I think, to psychedelics.
So there is a chance that the FDA...
Name names.
Well, Rick Perry is a prominent Republican who's been very supportive of psychedelics.
I'm not sure about the current FDA commissioner, whether he's spoken about it, but RFK Jr. has.
And, you know, his mother famously was treated with LSD.
She suffered from depression, and his father actually tried to stop the prohibition on LSD back in the 60s.
He actually spoke on the floor of the Senate about how helpful LSD had been in his wife's treatment.
So there's a history in that family.
Whether they will actually move or not remains to be seen.
But what's interesting, I think, is that there's support for psychedelic therapy on the right.
And people who care about veterans' issues have been very supportive of psychedelic therapy,
and one in particular, which is a pretty exotic psychedelic that most people haven't heard of, is ibogaine.
And this is a psychedelic that comes from the root of the iboga tree or shrub.
And it turns out to be particularly effective with people who are dealing with addiction, opiate addiction.
But it's a long trip. It can go on for 30 hours.
Oh.
Yes. And it has more toxicity than other psychedelics.
So you need to be on a heart monitor when you use it.
So it has to be used very carefully.
But it has shown effectiveness.
So there's a lot of exciting work going on.
And I think this field is going to yield some really helpful treatments for things like addiction, depression,
PTSD, and maybe even more physiological problems.
So what direction are we headed in legally?
Are we moving in the direction of lifting restrictions on research or making it harder to do research at universities?
I would say it's getting easier to do research on psychedelics that the government has been more supportive than it was for a long time.
I think that what holds things back is funding.
As you know, we've had deep cuts in NIH funding for all kinds of research.
Psychedelic research has not depended primarily on the NIH. It's mostly been private philanthropy.
But these are expensive studies to do. So they definitely need more help in terms of funding.
In terms of the legal landscape, there's several states now that have approved the use of psilocybin in a guided setting.
Oregon, New Mexico, and Colorado are all places you can go and have a
state-legal psilocybin experience. So states are, you know, the laboratories of democracy. They're
experimenting with psychedelics and they're probably way out ahead of the federal government.
It's still a federal crime though, so it's important to keep that in mind.
I want to get back to your book for a moment, the new book about a journey into consciousness.
Did you ever sink into a dark hole writing this book because you didn't really trust all the scientific theories that you were hearing about the root of consciousness?
And like your book was not going to come up with an answer and you were going to end by meditating in a cave, which you found a lot more helpful than doing research into all of these different theories of consciousness.
You've read my mind.
You know, this, there were many moments of despair in the process of reporting and writing this book.
It took me five years.
And there were many times where, I mean, you can ask my wife, ask Judith, you know, where I said, I don't know, I've dug a hole here and I don't know how I'm ever going to get out of it.
And some of it had to do with the mounting frustration with the science.
And some of it had to do with the fact that, you know, I had this classic male, problem-solution,
Western frame, you know, that there was a problem and I was going to find the
solution. And it took, well, my wife in part and Joan Halifax and some other people, you know,
who got me to question that and said, you know, yeah, there is the problem of consciousness,
but there's also the fact of it. And the fact is wondrous. The fact is miraculous.
And you've put all this energy into this narrow beam of attention.
Why don't you open that beam up further and just explore the phenomenon that is going on in your head,
which is so precious and so beautiful and not knowing and recognizing mystery opens you up
and makes you more receptive.
And it helps you transition from what is sometimes
called spotlight consciousness, this very narrow beam that's about problem solving and attention,
to something wider, something that is more like the consciousness of children, which is called
lantern consciousness, you know, taking in information from all sides. So that's kind of where I came
out, and it's certainly not where I expected to come out. I never thought this book would take
me into a cave in New Mexico. Well, Michael Pollan, it's really been great to talk with
you. Thank you. Thanks for coming back. Always a pleasure, Terry. Thank you so much for having me.
Michael Pollan's new book is titled A World Appears, A Journey Into Consciousness.
Check out our podcast if you'd like to catch up on interviews you missed, like this week's
interview with journalist Gideon Lewis-Kraus, who spent months inside Anthropic,
one of the world's most secretive AI companies, for a new report in The New Yorker.
Now the Pentagon is threatening to cut ties with Anthropic because Anthropic
insists on keeping restrictions around autonomous weapons and mass surveillance.
To find out what's happening behind the scenes of our show and get our producers' recommendations
for what to watch, read, and listen to, subscribe to our free newsletter at whyy.org
slash fresh air.
Fresh Air's executive producer is Sam Brigger.
Our technical director and engineer is Audrey Bentham.
Our engineer today is Adam Staniszewski.
Our interviews and reviews are produced and edited
by Phyllis Myers, Roberta Shorrock, Ann Marie Baldonado, Lauren Krenzel, Monique Nazareth, Thea Chaloner, Anna Bauman, Susan Nyakundi, and Nico Gonzalez Whistler.
Our digital media producer is Molly Seavy-Nesper.
Therese Madden directed today's show.
Our co-host is Tonya Mosley. I'm Terry Gross.
