Big Technology Podcast - Can AI Achieve Consciousness? — With Michael Pollan

Episode Date: February 25, 2026

Michael Pollan is the author of A World Appears: A Journey into Consciousness. Pollan joins Big Technology Podcast to discuss whether AI can ever become conscious and what that question reveals about the nature of mind. Tune in to hear a nuanced debate about whether consciousness is computable, where today's LLMs fall short, and how researchers might actually test machine consciousness in the future. We also cover materialism vs. spirituality, the "hard problem" of consciousness, psychedelic experiences, and the emerging science of plant sentience. Hit play for a thoughtful, surprising conversation that brings the AI consciousness debate back down to earth while opening up some of its strangest possibilities.

Transcript
Starting point is 00:00:00 Can AIs be conscious? Now that we have a better understanding of the mind, we might have some answers to that question. That's coming up with best-selling author Michael Pollan right after this. Michael Lewis here. My best-selling book The Big Short tells the story of the build-up and burst of the U.S. housing market back in 2008. A decade ago, The Big Short was made into an Academy Award-winning movie,
Starting point is 00:00:24 and now I'm bringing it to you for the first time as an audiobook narrated by yours truly. The Big Short's story, what it means to bet against the market and who really pays for an unchecked financial system, is as relevant today as it's ever been. Get The Big Short now at Pushkin.fm slash audiobook, or wherever audiobooks are sold. Welcome to Big Technology Podcast,
Starting point is 00:00:48 a show for cool-headed and nuanced conversation of the tech world and beyond. Today, we have a great show for you. We are going to drill into the depths of artificial intelligence, intelligence's ability to achieve consciousness, and we're going to do it with the perfect person. We have Michael Pollan here. He is the author of the book, A World Appears, A Journey into Consciousness that is out this week from Penguin Press. Michael, great to see you. Welcome to the show. Yeah, good to see you, Alex. So let's just start with consciousness. The first thing that I really
Starting point is 00:01:17 felt while reading your book, because you describe consciousness in many different ways, is just that consciousness is kind of amazing, and we're going to get into whether AIs can achieve it, but just strictly on the human side of things to start. I mean, it is amazing in some ways that, yes, we're here, but we have this awareness of ourselves in the universe that's just kind of mind-blowing the fact that it exists. Wouldn't you agree? Yeah, you know, it's funny.
Starting point is 00:01:43 We don't think about it very often. We go through life thinking it's, you know, it's totally transparent, and the world as it appears to us is as it appears. But in fact, it's all a product of this phenomenon we call consciousness. And in humans, it's particularly complex and wondrous in that we don't just exist like other animals. We know we exist. And that changes us in interesting ways.
Starting point is 00:02:11 So it's funny, you know, it's a universal phenomenon, but many people don't think about it that much. And one of the goals of the book is really to get you to think about it because it is a very precious gift. and it's one that in some ways I think we're squandering. Right, and we'll get into that. I think one of the interesting things is that the questions about consciousness is, you know, if let's say the goal was survival, you know, we could do that all sort of mechanically. But for some reason, we don't just do it mechanically, right? We do it in a way that we have awareness of it as we go along.
Starting point is 00:02:48 Yeah, and that's kind of, you know, that's one of the hard problems. I mean, this idea, you know, most of what the brain does, it does without our awareness, right? It's monitoring the body 24-7. It's adjusting your heart rate, your blood pressure, your, you know, glucose levels. I mean, an amazing amount of things to keep you at the proper homeostatic set point. And we're not aware of this. Yet some at the tip of that iceberg of mind is this area of stuff we are aware of. So why isn't it all automated?
Starting point is 00:03:24 Why wouldn't that have made more sense? Why did some of it have to come into our awareness? And there are various theories about that. You know, one, and I find it persuasive, is that there are certain things that go on for a creature that need to be addressed in a reflective way. In other words, that if you, let's say you have needs that are incommensurate and are incompetent. You're hungry and you're tired. Which should you address first? That kind of stuff would come into consciousness.
Starting point is 00:04:00 I also think consciousness is really helpful in a social situation. When you're dealing with a world that is fundamentally unpredictable, that is to say what other people are going to do at any given time, what other people are going to say at any given time, and you have to be able to imagine yourself into their heads, their heads, we call it theory of mind. And so I think for the fact we live in this intricate social world, being conscious is a huge boon. And having automated things, I don't think you could automate something as complicated as social interaction. Well, I guess we're going to find out
Starting point is 00:04:38 pretty soon because when you think about, I don't know, is it automating or is it not? But we certainly have, if you think about, can computers handle social interaction? The answer is yes, they definitely can and people are are falling in love with them and we can get to that in a minute but yeah um you know to me the the thing that you talk about is is this type of behavior or the type of yeah behavior automatable um i want to ask you the flip side of that question which is is is consciousness computable are we able to break down what consciousness is and then eventually with materials that we have sort of figure out how to build it. Yeah, I don't think so.
Starting point is 00:05:23 I don't think everything consciousness does or is is computable. I think the brain is more analog than digital in many ways. And there is a deep metaphor at work here when we even ask that question, which is, is the brain a computer? And that metaphor is very powerful. And, but I don't think it holds up when you think about it hard enough. And one of the goals in the book is to help people think through something like that. So historically, it's very interesting that whatever the cool cutting-edge technology is of any
Starting point is 00:05:59 moment, we have likened that to the brain. At various times, we've likened the brain to a mill, like a grain mill, to a loom, to a telephone switchboard, to a clock, and now to computers, because they are the cutting-edge technology. but metaphors are Norbert Wiener said this really well that the the price of metaphor is eternal vigilance in other words being really aware not to fall into the trap of equating two things that that you're using as one as a metaphor for the other and I and I think you don't have to look very hard till you see that the computer's brain metaphor breaks down first you don't have in brains the hard separation between heart between software and hard
Starting point is 00:06:46 In computers, you can run the same software on any number of different hardwares. But in brains, hardware and software are absolutely indistinguishable. Every experience, every memory is a physical set of connections in the brain. Your life story has changed your brain in a material way. Your brain and mind is not interchangeable because we grew up with different life experiences. That period of pruning that happens with brains of children happens very differently depending on life experiences. You also have this analogy of neurons with transistors, right? You know, transistors are either on or off, and that creates, that's the basis of computation.
Starting point is 00:07:37 But yes, neurons in the brain fire or don't fire, but they are on a spectrum of intensity of firing, and that's all influenced by chemicals, by drugs, hormones, neurotransmitters. So our neurons bathe in a bath of chemicals that influence their firing rate and intensity and all this kind of stuff. And the third reason, I think, that I don't see consciousness being computed is that consciousness seems to be intimately involved with feelings. And feelings are, yes, they convey information. but I don't think they can be reduced to information.
Starting point is 00:08:19 I think there's a residue and a feeling that is a bodily sensation. And feelings depend on things that I don't think computers have, which is to say mortal bodies that can suffer. Now they're telling us they can suffer, and Anthropic is always worried about hurting the feelings of Claude to a remarkable degree, allowing it to opt out of uncomfortable conversations. But I actually think feelings are completely without weight unless there is human vulnerability,
Starting point is 00:08:53 the ability to suffer, and possibly the fact of mortality. So for all those reasons, I think we're talking about something that can't be conscious, at least as we understand it. There may be something that feels like consciousness, and certainly they're very good at faking us out. I mean, already, as you say, people are falling in love with you. chatbots. Chatbots are, you know, striking up friendships with people. Seventy-two percent of American teens turn to AI for companionship already. But, you know, that's not the real thing. And as much as people in Silicon Valley like to say that the simulation is as good as the real
Starting point is 00:09:36 thing, I don't think that's always true. I don't think a weather simulation will ever get you wet. I think that we I think they're real there is real distinctions between simulation and reality. There was a great part of book. I think it's in the early part of the book where you talk about how you had a teacher who said that you can boil biology down, the human body down to $4 worth of chemicals. And you hated that because you felt that it's very reductive and and didn't fully capture what it meant, what the essence of human is. Yeah, I mean, that's kind of when I realized I was on the team of the humanists instead of the reductive materialists. I mean, this was eighth grade. And yeah, and he thought it was really cool on the first day of chemistry to say, your real value is $4.60. That's what all the carbon and other things you're made of would cost at a chemical supply company.
Starting point is 00:10:31 And I thought, what a fucking idiot. No, I have had a similar experience where I had a friend who's a neurobiologist. And, you know, I'm definitely on the side of like, you know, feelings are real. And she was always like, well, love is just chemicals. And I, you know, yes, in a way it is, but it feels like it's not. But for the purpose of this conversation, let me take that side. Okay. I mean, after all, you know, feelings like you talked about, where do feelings come from?
Starting point is 00:11:04 They come from chemicals. And what's going on with neurons? well, they're storing data and firing. And yes, okay, maybe there's a certain level of complexity or different chemicals that need to hit to cause them to fire. But ultimately, this should be in some way reproducible. It's not like there are god particles inside the brain that we couldn't actually fabricate or data that we couldn't store.
Starting point is 00:11:31 I'm not appealing to magic, but I am appealing to a level of nuance and qualitative distinctions that I think are beyond the ability to digitize. You know, if you read Proust, who is just brilliant at describing phenomena of consciousness, right, feelings, insights, he points out that everything that happens to you is different than what happens to me. And that's because when I look at a rose or a Madeline or whatever it is, I am bringing a lifetime of associations to this.
Starting point is 00:12:10 My memories of what roses are are different than yours. My associations with the smell. It's so layered and complex and specific. There is familiarity. You know, what is familiarity to a computer? And I think we lose track. I think there's a tendency when we're dealing with technological, simulations of things to simplify what they are and lose track of the nuance.
Starting point is 00:12:44 There's a Sherry Turkle is a MIT sociologist that I interviewed for the book. And she's, you know, she says at some point technology allows us to forget what we know about life. And what she's getting at, I think, is that when you have a conversation with a bot or a computer in general, you are reducing or simplifying your notion of what a conversation is. You're leaving out what's going on between us right now, which is acknowledgement, skepticism, body language, all the subtleties of human conversation are stripped away. It's kind of, you know, the paradigm case is the emoji, accepting the emoji as a substitute for emotion. So I think we have to be careful about that when we simplify these phenomenon like machine consciousness,
Starting point is 00:13:38 like, you know, conversations with machines, relationships with machines, what are we doing in the word relationship when we count that, a chatbot and a person's relationship? So I'm just kind of alert to these layers of meaning and significance that attach to everything we touch. And I, you know, know, maybe you get there with compute, but I don't see how. I think it, I use the metaphor of encryptedness, that there is, and William James and Marcel Proust both talked about this idea that, that there is a distance between any two thoughts, any two people thinking any two thoughts that just can't be bridged, except imaginatively through art. So I think we're in a realm that
Starting point is 00:14:30 is beyond the genius of Silicon Valley, such as it is. Which is interesting. And look, I'm not going to spend our time together trying to convince you that today's LLMs are conscious. I don't believe that. Very few people within Silicon Valley do believe that, even though you and I have both had interesting conversations with Blake Lemoy and the former Google engineer who was fired, maybe in part because of that belief.
Starting point is 00:14:56 But I will say that there is, there is, so I accept all your. arguments for now and I think there is an interesting belief within Silicon Valley that this is just a temporary situation i'll tell you something that demis isabas uh the founder of the founder of deep mind and the CEO of google deep mind uh spoke spoke with me about earlier this year he said uh this is something that he spoke with me about and on the google deep mind podcast he said that information is the most fundamental unit of the universe not energy not matter information and i think what that means is his belief is that if you go down to the element the very like foundational level of anything you'll find some form of data or you know that you could
Starting point is 00:15:43 end up manufacturing and uh and end up building from the ground up in a computer scenario in a simulated scenario well that's the kind of worldview that if you you you know grew up in the world of computers would be very persuasive. I mean, I think we have to ask the question, are, is the concept of information a map, or is it the territory? He's saying it's really the territory. He's saying that that is the building blocks of reality. And if he's right, then, yeah, many things follow from that. And he's not alone, by the way. There are other physicists who believe, too, that information is at the bottom of everything. I tend to think it's more map than territory. Explain the map and territory distinction. Well, it's a useful distinction. You know, it's very easy to, when we have a model,
Starting point is 00:16:40 a scheme to describe something, it's very easy to fall in love with the model or the description and overlook the fact that it's representing something that's not going to be exact in the same way a map can't capture everything about the territory it describes. It's a simplification. I think that may be true for information. But what do I know? I mean, you know. Right.
Starting point is 00:17:09 I think that's the point, right? It's a really good argument. Yeah. And guess what? We're going to find out by trying to do this. And the most positive thing I can say about the efforts to design and build a conscious AI, which is going on. in openly and secretly all over the place,
Starting point is 00:17:29 is that it will teach us something about consciousness, because we don't really understand how you generate consciousness out of a brain. So if it turns out you can create consciousness, that will tell us that, yeah, he's right. And information is foundational. Right. And if it's feelings that are foundational,
Starting point is 00:17:53 and they can't be reduced to information, well, then we have a problem. Although there are people building conscious AIs who accept that idea. I profile somebody in the book named Kingston Man who's trying to build a robot because he understands you need a body to be conscious and you need a vulnerable body. So he's actually building a robot with soft, terrible skin loaded with sensors so that this robot can have really bad times and be injured. and he thinks that will produce the kind of feelings that will, you know,
Starting point is 00:18:28 will those be real feelings? He's not even sure. But he's working on that assumption. So I do think, you know, we're kind of stuck in our efforts to crack what's called the hard problem of consciousness. And this effort to build a conscious AI is probably one of the most promising intellectual experiments to help us understand it. Whether it succeeds or fails, I think it's going to teach us something really important. And that's exciting. I know a lot of people worry about, you know,
Starting point is 00:19:04 do we owe moral consideration to a conscious AI? I think, you know, before we worry about the tender feelings of our computers, there are a lot of humans we're not extending moral consideration, too. And so much of the Silicon Valley conversation, strikes me as a way to address fun thought experiments about the future and absolutely ignore what's going on in our world today. Well, let me speak a little bit more about where I think Demis was going with that. Yeah, please. Because, you know, so for him, I think it's not, when we spoke, we've spoken a couple of times about this. It's not that, you know, he thinks Gemini, the Google L.M, you know, is conscious or that's not what he's trying to get at.
Starting point is 00:19:48 I mean, I don't think what he's trying to get up with this, you know, information is the fundamental layer of the universe. I think the point is he's found a way to use AI to decode proteins. The next thing on the path is building a virtual cell. If you can build a virtual cell, you can build virtual organs. If you can build virtual organs and virtual cells, you can start testing various cures to diseases, you know, to ailments, whether they're mental or physical.
Starting point is 00:20:21 And that is sort of the idea of wanting to pursue this. It's, you know, of course, and I'll agree with you. I think in Silicon Valley, there are plenty of insane things that go on. And I'm not, I want to stand up here as a great defender of the universe. I mean, I think we're all going to have a digital twin at some point that will be very useful in diagnosing disease and predicting the outcome of various health situations. I mean, people are working on that now, especially with regard to the microbiome,
Starting point is 00:20:52 but other things too. And I think that that'll be useful. But the interesting question will be whether having built up from the cell, can you then make that leap over the gulf of, you know, biological flesh to subjective experience? And the problem we haven't talked about is, how are we going to know? Because, I mean, as we said,
Starting point is 00:21:15 they already can fool us. They're very good at that. They speak to us in our language in the first person, which is, that was a fateful step that we took without really thinking about it. Was it with Siri? I don't know. It may have gone even earlier than that. But that's a kind of wild idea that we decided, yeah, let's have the computers
Starting point is 00:21:34 talk to us as people. But anyway, so how are we going to be able to test them? The Turing test doesn't work for this consciousness question. It was designed for the intelligence question, which is somewhat simpler. But since they can pretend to be conscious, and some of them are very good at doing that, we're going to need a better test.
Starting point is 00:21:55 And the best one I can think of, and I don't know technically what would be involved in doing it, is training a chat bot on everything but the human conversation on consciousness, nothing about feelings, maybe don't let her read any novels or poetry because that would give it you know a context in which to talk about conscious experience and then engage it in a conversation about consciousness could it could it hold its own under those circumstances yeah i love it i don't know whether that's possible but um or is possible you have to start at the very beginning though you can't remove things apparently from the training set you're going to have to build up from the bottom so i hope someone takes that on yeah I read that in the book and I thought that was like a terrific potential experiment. And, you know, on this question of, you know, how do we know, I think a question came up that is kind of silly, but I'm going to ask it anyway.
Starting point is 00:22:58 Which is, why are we, you know, so precious about consciousness only being ours or only being? And again, like, I'm not arguing that LMs are conscious. but like, you know, you type, you speak with LLM today and you ask it, what are you? I'm a large language model. What are you doing? I'm trying to help, you know, help you in these things. And, you know, it seems to me like, well, what is our obsession with putting this barrier up that, like, only humans can be conscious where, like, if you speak with this thing
Starting point is 00:23:30 about whether it's self-aware, it's clearly self-aware. Yeah. Well, I'm not limiting consciousness to humans. I'm freely, I'm giving it to plants, as you know, as you know, having read the book. So I'm pretty generous in who I'm willing to share our sentience with, if not exactly, consciousness. So I'm not being stingy about it, honestly. And I think that's kind of an interesting phenomenon that I talk about in the book, that we are our definition of the human, what's special about us, which has always been related to our intelligence and consciousness, is,
Starting point is 00:24:07 under enormous pressure today from these thinking machines and possibly feeling machines. And then from all these animals that we're learning are much more conscious than we thought. You know, we always thought we had the monopoly, not just unconsciousness, but toolmaking and culture and language. And one after another, they're falling. So who are we? And are we more like these animals who can feel and are mortal and can suffer? or are we more like these thinking machines which speak our language and can talk to us the way we
Starting point is 00:24:41 talk to one another? So, you know, whose team are we on? I think it's one of the more fateful questions we face as a species. Yeah. And it's also, it's interesting because our answer to that question will lead us in some ways to the way that we handle the machines, the ethics of it. Or the ethics toward the animals, too. Well, unfortunately, I think our record, and I think you noted this, our record on ethics towards animals and humans is going to is poor. But we have evolved in our thinking once we realize what was actually going on inside the minds and souls of some of the creatures on the planet. For instance, and this is the thing that really struck me as I read, Descartes, thought that animals were not feeling that when you beat them up and they howled, they were only mimicking.
Starting point is 00:25:34 And they weren't actually. It was just noise. And so just noise. And then eventually we realized, hey, wait a second. That's wrong. And we shouldn't, you go to jail now if you did what these experimenters were doing to dogs in the past. Although, I guess, not to monkeys all the time. Well, he was dissecting, he was dissecting, you know, dogs and rabbits without anesthesia
Starting point is 00:25:54 because he didn't believe they were conscious. And yeah, he was wrong. And could we be making the same mistake with our machines? Well, some people think we are, and we might. But, you know, the idea we're automatically going to treat them with all this moral consideration because they're conscious. Yeah, that seems... Given the fact we continue to eat animals, we know, we full well know our conscious.
Starting point is 00:26:21 I don't know that we're quite as enlightened as that conversation suggests. So let's talk about one of those efforts that you do write about in your book, one that is known. It's called, I think, Free Energy Fighter, where... These scientists have built an AI that's trying to get back to some form of homeostasis, and they think that because it's trying to do that, it can be conscious. I wasn't very convinced that this was the right approach to take, but I'm curious to hear your perspective on it and what exactly it was. Yeah, so one of the characters I profile in the book is a really interesting neuroscientist
Starting point is 00:26:55 named Mark Soames. And he's from South Africa. He's actually trained as a psychoanalyst. and he is developing a theory of consciousness around feelings. His work grows out of the work of Antonio Demosio, who was really the first in this modern wave of consciousness scientists to make us pay attention to feelings as opposed to thoughts as the basis of consciousness.
Starting point is 00:27:20 And Psalms wrote a really interesting book called The Hidden Spring, and he makes the case that consciousness begins in the brain stem, not in the cortex as people had previously thought. and he and he proves this or tries to prove this with evidence that people who lack a cortex and some people are born without one nevertheless are conscious so the cortex gets involved in consciousness you know cortex is the evolutionary most recent advanced part of the brain very you know it's human more human than other parts and that he says it doesn't really get engaged until late in the process it starts with a feeling let's say of hunger and then the
Starting point is 00:28:01 the cortex gets involved like, well, I'll book a table at this restaurant, you know, at 8 o'clock, and forms images and counterfactuals and all that cool stuff. You would think, and I thought, that since he was so interested in feelings, he would believe that it was impossible to make a machine that had feelings. But no, I was wrong. And he has assembled a team in South Africa. Actually, it's an international team. There are people in several continents working together to develop.
Starting point is 00:28:31 a conscious AI based on his theory. Now, his theory is that feelings arise when homeostatic set points are being violated, and you need to get back to balance. You're hungry, you're tired, you're thirsty, your blood pressure is too high, whatever. But that many of these feelings can be addressed unconsciously. But when you have two feelings that are in conflict, that's when things become conscious. And so he's trying to create a situation, and it's essentially an avatar in a video game right now. It's not about advanced computation.
Starting point is 00:29:11 They're really working in the idiom of video games. What happens when this avatar is both hungry and tired and has to make a decision which to privilege? That uncertainty is where consciousness is born. He defines consciousness very successful. simply as felt uncertainty. And so he's trying to make his avatar experience this felt uncertainty. I asked him, well, would these feelings be real or artificial? And he said, well, they're feelings in the context of the game. So there's simulation. But he said, for the avatar, they're real. So I found this all kind of unsatisfying, interesting, but unsatisfying.
Starting point is 00:30:01 So that's the way he's going about it. I've asked other people who are pretty knowledgeable computer scientists, nobody seemed to think large language models are the way to go toward consciousness. But people envision future models of AI that are very different and combine different modules. And a large language model would just be one module. And as Blake Lemoyne said, you know, Lambda, which was the one he was dealing with, is more than a large language model. It had other
Starting point is 00:30:36 modules, too. But I've talked to people and I've said, well, why would it be useful? How could you monetize consciousness? Why are you bothering except as an intellectual experiment? And some people have said that, well, in the same way, consciousness helps us solve problems in a unique way, having a module that could reflect on itself and would possibly help you get to AGI. And that one theory of consciousness is the global workspace. And the idea there is that there's tons of work, there's tons of things, modules in your brain, going about their business. They compete for attention in this workspace and certain very important information that needs to be
Starting point is 00:31:22 broadcast to the whole brain so it can take action, burst ignites into this workspace, and then that's the contents of consciousness. They feel that you could create an AI that had a similar sort of competition for attention, and consciousness would be useful in that context. So, you know, we'll see. I mean, we could have a bet. Yeah, I mean, again, like, I'm just throwing these arguments. out of here. I want to be able to look at this argument from all different sides. And I think the best
Starting point is 00:31:59 argument against this video game, it's not perfect, but it does the trick was, I think you brought it up that a thermostat basically has a set point and works really hard to get into equilibrium when it's out of that. And so it's not conscious. I think we would all agree, not conscious. Yeah. Although I talk to people who said, well, that's the basement. That's the very, you know, that's the bottom of consciousness. Got to start there. Yeah, you start with the thermostat and build up from there. So I think just to conclude this segment, you know, we've covered a lot of ground.
Starting point is 00:32:30 But the things that I would say is, you know, if you are a believer that machines can be conscious, I think right now clearly it's not there. But a lot of these objections seem to be things that maybe the tech industry, over time, maybe in decades, can get to. This fear of mortality, they can develop a, does it? In fact, we already know there's a desire that they don't oftentimes want their values overwritten. They don't want to be shut off. That's what Lambda said to LeMoyne.
Starting point is 00:33:00 This idea of having this familiarity, right? Exactly. And familiarity, well, a long context window will certainly build familiarity with someone. And the emotions that you see when you're in person with someone: avatars and computer vision may one day be able to give them an experience where they're seeing our reactions as well, whether we want to allow them to or not. I don't know. But I think this work on what consciousness is, and whether machines can achieve it, is foundational now and will be very important moving
Starting point is 00:33:40 forward as we start asking these questions more and more. Yeah, I would just add, anything's possible, given enough time and work. That's right. So anything, anything, though? I don't know. Yeah, I mean, it would take a very different architecture, I think. And, you know, there's talk about neuromorphic computers. We're also building brain organoids in a solution. I think they've got a shot at becoming conscious. They're working up from, you know, actual brain cells coming together to form organisms.
Starting point is 00:34:16 I mean, a lot of things could happen, definitely. And I'm obviously not talking about the longest time horizon imaginable. No, I think it's good to have some real science about what we're seeing today, which I think you've provided in large quantities, which is much appreciated. So good. And I should also point out that, you know, I'm not very sophisticated on the topic of computers or AI. I wasn't expecting to write about AI and consciousness, but after ChatGPT kind of burst into awareness in 2022, the questions started, and it was Blake Lemoine, actually,
Starting point is 00:34:55 who put it on the agenda, I think, for me and a lot of other people. I realized I couldn't write a book about consciousness without delving into this, and that it was really a very interesting phenomenon. Right. And, you know, I came out where I did, and that may reflect my biases. It probably does.
Starting point is 00:35:14 Most arguments do. I mean, seeing the world as made of information: you can see how that might be the bias of someone steeped in computers. Definitely. Yeah, and that's why I think taking those theses and bringing them outside of the tech world is important. So when I saw the concept of your book come into my inbox, I said, we've got to have a show about this, because it's going to be front and center in this world. So on the other side of this break, I definitely want to cover maybe some non-tech stuff.
Starting point is 00:35:48 Maybe about how consciousness and religion might intersect. And you brought up that you started with plants. So we've got to talk a little bit about plants. And we'll do that right after this. Searchlight Pictures presents In the Blink of an Eye, on Hulu on Disney+. A sweeping science fiction drama spanning the Stone Age, the present day, and the distant future, about the essence of what it means to be human, regardless of our place in history. The film is directed by Oscar-winning filmmaker Andrew Stanton and stars Rashida Jones,
Starting point is 00:36:18 Kate McKinnon, and Daveed Diggs. Stream In the Blink of an Eye now, only on Hulu on Disney+. Sign up at DisneyPlus.com. And we're back here on Big Technology Podcast with Michael Pollan. He's the author of the new book out this week, A World Appears: A Journey into Consciousness. I think, again, great book. There's one interesting thing in the book, many interesting things, but one that I seized on,
Starting point is 00:36:43 where you said, what is it, that the belief in consciousness is an escape hatch from materialism. And that's the other side of the thing that I brought up earlier, this belief that everything is computable. Well, if you believe that not everything is computable and not everything is information, then this idea of consciousness is actually quite refreshing, because it is something that simply doesn't play by the rules of our traditional materialism. You know, people have been trying to do what has worked everywhere else in science, which is reduce phenomena, you know, to matter and energy. And it's been an incredibly productive strategy, but it doesn't seem to work yet with consciousness. And the effort to reduce it to
Starting point is 00:37:30 things we know hasn't really worked. And it's a tremendous challenge to scientific materialism, which is sometimes called physicalism, because that framework has not allowed us to understand consciousness. Now again, might it at some point in the future? Sure, we shouldn't rule that out. But I also think we have to ask, as some consciousness scientists have come to this point of, well, maybe we need to look beyond physicalism. And I profile Christof Koch, who has been working on consciousness since the late 80s. He was kind of working with Francis Crick, who had won the Nobel Prize for the discovery of the double helix of DNA, and who unlocked the mystery of heredity, which is, you know, quite an achievement.
Starting point is 00:38:21 Then he turned his attention to consciousness. He was going to crack that using the same reductive science techniques. He worked with Christof Koch for many years, and, you know, they were trying to isolate the neural correlates of consciousness. Could they find the neurons in the brain responsible for conscious experience? And Koch realized at a certain point that that really wouldn't explain anything. That would give you a correlation at best. But subjective experience is subjective, and how can you explain that in terms of anything objective? And this is the hard problem,
Starting point is 00:39:00 what David Chalmers calls the hard problem. So it's a unique problem. I think that there is some wish fulfillment that we have something that is immaterial, that therefore might survive the mortal body. I think behind a lot of people's talk about consciousness is the word soul, even though it's not articulated. But what is the soul? It's also this immaterial essence of us that is indestructible. So I think there's a little wishful thinking around consciousness, that maybe it's immortal in some ways. And I don't, you know, I don't go there, but I think a lot of people do. And we're looking for something that transcends this material world, and could it be consciousness? Now, there are theories of consciousness that are not materialist in that sense.
Starting point is 00:39:58 Well, actually, I shouldn't say that. They're materialist, but they stipulate a different kind of matter. I'm thinking of panpsychism, which is the philosophical idea that everything has some itsy-bitsy quotient of consciousness to it, every particle, every wave. So consciousness doesn't come into the world with us; it precedes us. And somehow, yes, these particles come together, these mini-consciousnesses come together
Starting point is 00:40:28 and create the big consciousness we are. But that combination problem, how you get from conscious bits and pieces to us, is another hard problem. And there are other ideas, that consciousness is a universal field that we channel, and that our brains are indispensable, but only in the way that a radio or TV receiver is indispensable.
Starting point is 00:40:50 And that's, you know, a kind of idealism. So all I'm suggesting is that our failures to explain consciousness in material terms, given the science we have, make you think, well, we should at least keep an open mind about some of these seemingly weird and crazy ideas. Right. And I do think you can't talk about consciousness without the spiritual. In fact, one of the questions I wrote down was, if we do solve consciousness with computing, does some of the mystery of the world go away, if that becomes computable? And you can even flip that statement and say, because consciousness is so mysterious, the spiritual can exist.
Starting point is 00:41:42 The spiritual can exist, yeah. I think, to me, that's where a spiritualism would reside, in that mystery. See? Yeah, but that's a definition of spiritualism that I'm not sure is mine. It's basically suggesting something supernatural exists. To me, spiritual experience, and this grows out of my experience with psychedelics, is more about transcending the self and merging with something larger. Now, for some people, that's the divine, and it's something magical, but for other people it's just nature or other people or love. So it depends on your definition of the spiritual, but I do think the hardness of the hard problem nourishes certain kinds of spiritual thinking, that we've got something here that cannot be reduced to the usual categories.
Starting point is 00:42:38 And that may well be true. Yeah. Yeah, I mean, I think with spiritualism, it can be a bunch of different sides of the same coin. I guess it could be belief in the supernatural, or, you know, sometimes it's a way of saying that there is something greater than just the individual. Yeah, that's true. Yeah, something bigger than what we can perceive as individuals. I always find the tension in my mind is between spirituality and egotism. And to the extent you can reduce egotism, whether through psychedelics, but also experiences of art,
Starting point is 00:43:12 experiences of awe, all of which kind of shrink the I in a way that can feel really good, that, to me, is the door to spiritual experience. So let's just go through this route one more time: do you think that if consciousness is solved, or the hard problem is solved, what does that do to this concept of religion and spirituality? It depends on how it's solved. It would have a huge impact. So talk through a little bit about what those impacts could be.
Starting point is 00:43:48 It could nourish our sense of an animate world. Let's say something like panpsychism is proven. And then we'd realize, oh, my God, consciousness is not a human thing. It's a universal thing. It's in everything. That could nourish a new religious conception. And I say new, but it's actually more like religion pre-monotheism, right, where everybody was an animist? Everything had much more life to it than we believe.
Starting point is 00:44:20 We've kind of knocked that out of ourselves, but I think it's the default human perspective to see life everywhere. Children certainly have it, right? They're all animists until we knock it out of them in school. And so you could see a return to a more spiritual or religious world. Or, if it's solved by, you know, understanding some activity, some behavior of neurons, or some emergent property of neurons organized in a certain way, and you could really prove it, not just say it's emergent, because that's just abracadabra, then you could come up with a material explanation that would demystify the world even further.
Starting point is 00:45:07 So I think a lot hangs on that discovery. Absolutely. It would be one of those, you know, identity-changing discoveries. Right. And in fact, the reason why we have so much study of this now, and we didn't beforehand, is, okay, of course, science has advanced. But as you point out in the book, this was something that was left to the church, this idea of consciousness. Science, you know, back when it started making some real breakthroughs, said, we'll take everything outside of this. And now that deal is broken. Yeah, we'll take everything measurable and quantifiable, and the church, you can have everything subjective and qualitative. That was Galileo's deal. And Descartes's, to some extent. And it was a very pragmatic deal. But it's left us with a science that's ill-equipped to study what they left by the roadside, which is to say subjectivity. And, you know, the interesting question, too, is can we redefine science in a way that makes it easier to study consciousness?
Starting point is 00:46:08 And there are people who argue this. I talked to philosophers, Evan Thompson in particular, a co-author of an amazing book called The Blind Spot. And he says the blind spot of science is its inability to deal with lived human experience. It just doesn't value that. So, for example, it doesn't value the experience of the color red, which it sees as just a frequency of light; red is a construct of the mind, and therefore we ignore it. No, it's a construct of the mind! That's fucking incredible. Why aren't we looking at that? And he's basically saying the experience of red in the minds of humans is a phenomenon of nature that deserves as much attention as electromagnetism. By the way, Galileo never should have agreed to
Starting point is 00:46:57 that deal because it ultimately didn't work out very well for him. Yeah, it's true. It's true. It was a good try, but it probably saved a lot of other scientists from trouble. Okay. Well, I'm glad that they turned out okay. Lastly, let's talk a little bit about psychedelics and plants because that seems to be where this emerged from.
Starting point is 00:47:16 I picked up the book and I was like, so did Michael just like do a lot of mushrooms and start talking to plants and then ask what consciousness is? And I wasn't that far off. No, you weren't. One of the inspirations for the book was the experiments with psychedelics I did for How to Change Your Mind. And, you know, I'm not unusual. Lots of people who do psychedelics start, like, having trippy thoughts about consciousness. And the reason is that psychedelics smudge this windshield that's normally perfectly transparent between us and the world that we were talking about at the beginning.
Starting point is 00:47:52 And suddenly you realize there's a windshield. And why is it this way and not that? And you defamiliarize consciousness to yourself. There are other ways to do that. Meditation does that too. Certain experiences of art do that too. Specifically with regard to plants, I did have this experience in which the plants in my garden,
Starting point is 00:48:12 and I had taken mushrooms and I was in my garden in Connecticut, seemed aware. They seemed aware of me. They seemed like they were returning my gaze. They were more alive than they had ever been. And afterwards, I dismissed this as your usual drug-addled, you know, psychedelic insight. But I also thought, well, let's see if we can test this against another way of knowing. Because I talked to people who said, what do I do with psychedelic insights?
Starting point is 00:48:44 Should I believe them? Should I dismiss them? And actually, William James talked about this with regard to mystical experiences. He said, we don't know enough to say whether they're true or false. The challenge is, one, how useful are these ideas, and two, can you corroborate them with other ways of knowing, i.e., science? So I went down this rabbit hole and found this community of botanists who call themselves plant neurobiologists, in full knowledge that there are no neurons involved, and they're doing really interesting experiments that show that plants are a lot more intelligent than we thought. And perhaps also sentient, which I should distinguish from consciousness, because sentience is a more basic kind of consciousness. It's just awareness of your environment, the ability to tell positive from negative changes and deal with them. Lots of creatures have that. Single-celled creatures have that.
Starting point is 00:49:43 And it may be just a property of life. And consciousness is how we do sentience. You know, we've elaborated it in various ways that we've discussed. So, you know, I learned about some incredible capabilities of plants. They can see: there are vines that actually change their leaf form to mimic the leaves of the plant they're colonizing. They can hear: if you play the sound of caterpillars munching, they will produce chemicals to repel those caterpillars and also alert other plants in the neighborhood. They can recognize self from other in a pot, and they'll share nutrients with related plants in a pot and not with other plants that are
Starting point is 00:50:31 of the same species but not kin. I mean, they can hear the sound of water in a pipe, and they'll send their roots over there to see if they can crack in. They're incredibly capable and intelligent, and they're not doing everything automatically by any means. The other thing that kind of blew my mind was you can anesthetize plants.
Starting point is 00:50:52 That was the... I was going to say, you can put them under anesthesia. That was the thing that literally... Now, what does that mean for a plant? Well, if you have, you know, a snapping plant, a carnivorous plant or a sensitive plant, things that have a behavior you can see, the behaviors will not happen under anesthetic. And the same chemicals put us out, which, by the way, we don't understand how they work on us either. Some of them are totally inert chemicals, like xenon gas, that shouldn't
Starting point is 00:52:04 react with us at all, but somehow put us out. So if a plant has two states, of being awake and asleep, then, you know, you can say there is something it is like to be that plant when it's awake that's different from what it is when it's asleep. At least that's the argument. And it's a tough one to refute. So, you know, I'm not ready to say plants are conscious, but sentience, I think we can make that case. And I think maybe that's a property of all living things. Yeah, the last thing I'll say here, and then we can wrap, is the thing that really struck me was
Starting point is 00:52:45 you talked about how plants, if you watch them sped up, can show real intent. For instance, there's a bean sprout that doesn't just kind of flail about to try to find something. It sees a branch and makes a beeline for it, twisting like a whip, effectively. And you told this story where there's an alien civilization that comes down to humans, but they're just really sped up, so we're moving so slowly they feel they can do whatever they want to us. Exactly what we do to plants. It's just a matter of perspective, of speed. Yeah. So, you know, every creature lives in its own dimension of space and time. And we live in a different dimension of time than plants. They're very slow from our point of view. And therefore, we don't
Starting point is 00:53:13 give them a lot of credit. But as this story makes clear, another species, an alien species, could look at us, and if they were sped up the way we are relative to plants, they would basically smoke us and turn us into jerky for the ride home. Well, I hope they're on our same speed, and we can just be friends with the aliens. Like we might be with the computers. Who knows? Let us try. Let's try. All right. The book is A World Appears. Michael, first of all, thank you for taking that mushroom trip. I'm glad it sparked the book, and thank you for coming on the show to speak about it. Thank you, Alex. It was a pleasure. It was great. All right, everybody. Thank you so much for listening, and we'll see you next time on Big Technology Podcast.
