Your Undivided Attention - Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

Episode Date: January 18, 2024

We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

Correction: Josh says the first telling of "The Sorcerer's Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

RECOMMENDED MEDIA

The Emerald podcast
The Emerald explores the human experience through a vibrant lens of myth, story, and imagination.

Embodied Ethics in The Age of AI
A five-part course with The Emerald podcast's Josh Schrei and School of Wise Innovation's Andrew Dunn.

Nature Nurture: Children Can Become Stewards of Our Delicate Planet
A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals.

The New Fire
AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order.

RECOMMENDED YUA EPISODES

How Will AI Affect the 2024 Elections?
The AI Dilemma
The Three Rules of Humane Tech
AI Myths and Misconceptions

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Transcript
Starting point is 00:00:00 Hey everyone, this is Aza. And this is Tristan. Welcome to Your Undivided Attention. And today we're going to do a very different kind of episode on Your Undivided Attention. You know, we're all interested in how does AI and humanity go well. And we often talk about solutions to that. You know, technical solutions like dealing with the chips or the training data that gets used to make AI or policy solutions.
Starting point is 00:00:26 But there's this kind of deeper question, which is, what is the drive to make AI in the first place? And what solutions would be enough when the drive behind building it is almost religious or mythical in nature? Now, a lot of you might think this is sounding a little bit weird for Your Undivided Attention, but a friend of CHT's kind of went behind the scenes and talked to a lot of the major players at the AI labs. And he wrote me an email about the conversations that he's been having, summarizing why
Starting point is 00:00:53 everyone at the end of the day is building this. Here's what he said. In the end, a lot of the tech people I'm talking to, When I really grill them on it, they retreat into number one, determinism. Number two, the inevitable replacement of biological life with digital life. And number three, that being a good thing anyways. And he goes on to say that these AI leaders have an emotional desire to meet and speak to the most intelligent entity they've ever met,
Starting point is 00:01:21 and they have some ego-religious intuition that they'll somehow be a part of it. It's thrilling to start an exciting fire. They feel they will die either way, So they'd prefer to light it and see what happens. Now, the quote I just read does not, I think, describe why everyone is pursuing AI. In fact, I'd say most people are not driven by this. But there's a handful of people at the kind of core center of some of the frontier AGI development that I think do have this psychology.
Starting point is 00:01:48 In fact, they've said so publicly. So if this is the psychology and motivation driving AI development, maybe in terms of solutions, we need to consider these AI questions through another lens. the lens of myth and our insatiable curiosity to seek the unknown and what seduces us by the possibility of new godlike powers our guest today is josh shry the creator and host of the emerald podcast which explores topics like psychology ecology and technology through the lens of mythology he's a writer and teacher as well as a lifelong student of world mythologies and traditions specializing in the myths of the Indian subcontinent.
Starting point is 00:02:30 Last summer, Josh put out a two-hour episode of the Emerald called So You Want to Be a Sorcerer in the Age of Mythic Powers, a.k.a. the AI episode. And both Tristan and I found it deeply moving. This episode looked at how our trajectory with AI is entwined with cultural myths about magic and spirituality. And it provided a perspective that we believe has been sorely missing from the conversation around AI,
Starting point is 00:02:54 and it gets beyond the hype of the moment and the drama of the race to ask profound questions about where AI fits into the larger human story, the narrative that has been unfurling for millennia, not what we must do, but who we must be. So Josh, welcome to your undivided attention. Thanks. Thanks for having me. It's great to be here. So, Josh, I want to start by asking you, as an expert on myth, why is myth useful in helping us think through the promise and peril of AI in this moment
Starting point is 00:03:23 where we're normally talking about it in terms of technical policy, GPUs, inference, and, you know, U.S.-China chip races. Yeah, so, you know, I was paying attention to the AI discussion and noticing exactly what you're talking about, which is that a lot of the discussion around AI
Starting point is 00:03:41 tended to circle around those types of issues, policy issues, programming issues, scientific issues, and this kind of thing. And what I felt, as you could say, a mythologist, is that something was being missed in that conversation. And when you really take a dive into AI and you see the powers that are being discussed, right, the scope of the powers that are being discussed,
Starting point is 00:04:08 so you have the power to create alternate realities instantaneously, to deceive millions or billions of people at once. These are what you could call mythic powers. These are powers that human beings have never, never really had before in many ways. And so I started to feel that really the only way to accurately talk about the implications of AI was to speak mythically. If it was just reason at play, we wouldn't keep trying to create things that have a really good chance of going haywire and destroying us, right? But yet we do. Why do we do that? Well, there's a part of us
Starting point is 00:04:47 that likes to tinker with what you can call mythic powers. There's a part of us that's drawn to There's a part of us that wants to see what happens if we just open the spell book, like the old story, The Sorcerer's Apprentice says, and start casting spells. And so I think understanding, yes, on the one hand, you know, obviously policy discussions and technical discussions are incredibly important. I think looking at the myths gives us a framework through which to start to understand both the deeper human drives at play and what you could call the greater implications of powers that we're tinkering with. I hear you say that. I can see two different worldviews. And I'm going down this path because I can imagine some of our listeners are like, really, we're going to be talking about myths? Why does that have anything to do with the hard power of AI? And, you know, on one hand, I sort of hear Joan Didion in my mind saying we tell ourselves stories in order to live.
Starting point is 00:05:50 And the stories we tell ourselves become our values and our identity and therefore everything we do. On the other side, I hear you saying, oh, you know, there's some deeper reason why we are pushing forward. forward with AI so fast and there's some like skeptic in the back of my mind that says well it's not really about that it's about power it's that if I don't build this new power then somebody else will so I'm forced to show up to build that power and in fact I believe myself to be a good actor and so I should get there first so that it's not a bad actor that is controlling this new power and has nothing to do with like myths and stories this is just about power and so I would love for you to respond to that and like reinstantiate why is it that these stories and myths are a good and predictive way of
Starting point is 00:06:37 looking at the world well what you said about joan didion and you know the power of story and the power of narrative everything is driven by mythic narrative in one way or another presidents are elected through mythic narrative societies unfold the way that they unfold through mythic narrative there's a mythic narrative about what the united states is that has driven the united states forward for many years. There was a mythic narrative about what the former Soviet Union was that drove them forward for many years.
Starting point is 00:07:07 There is definitely a mythic narrative at play in AI, because if the only drives at play were rational, you'd say, okay, the risks clearly outweigh the benefits, and so we're not going to do this. And yet, it's been a self-fulfilling prophecy all along. So, you know, you can't separate
Starting point is 00:07:21 if you look at just the what you're talking about kind of the most recent 10, 15 years of power dynamics and play and everything. You can't actually separate out the degree to which that kind of science fiction narrative AI has actually determined AI outcomes. I mean, you know, it's been a big part of it. The people who get involved in the programming, I guarantee you, they know the larger narratives about AI, and they
Starting point is 00:07:47 know these kind of tech utopian visions, and they know also these stories of how AI might take over and kill us all, right? And those narratives are interwoven in the entire AI story, and they're not extricable. So understanding, what do humans do when confronted with powers that we can't control or powers that, you know, potentially world-altering? How historically, say, have cultural traditions treated access to such powers? How do you avoid the Sorcerer's Apprentice Story, which is a story that's told and retold in various cultures in various ways
Starting point is 00:08:24 all throughout history and all around the world? And it's a very simple story. It's like the young acolyte wasn't ready for the powers that he unleashed and look what happened, right? Why does that story keep getting told in the myths over and over again? Because in one way or another, humans have enacted it. For listeners, they may not be fully familiar with that story. I think for most people it's probably like where they touch it is Disney's Fantasia, where Mickey Mouse is like working with the sorcerer and sort of this cute feeling story.
Starting point is 00:08:54 So can you give a brief recap of that myth for those. who aren't familiar, and then draw the metaphor out to understand what it helps us understand that's surprising about the race for artificial intelligence. Yeah, so the Sorcerer's Apprentice, the first known telling of the story goes all the way back to ancient Egypt. And a lot of people probably don't know that, that it's a story that has roots that go back thousands of years. And in ancient Egypt, they were very into sorcery, so it's an understandable story.
Starting point is 00:09:27 And like you said, many people may be familiar with it through Fantasia. It's a very simple story. There's a sorcerer, a magician of great power, and he has an apprentice, a young assistant, who's just started on his path of learning the ways of great animacies and intelligences, great powers. And the sorcerer has to go out and do an errand, and he leaves the apprentice in charge of the workshop.
Starting point is 00:09:54 And the apprentice, when the sorcerer leaves, gets an idea in his head. Like, well, you know, I'm ready. I've learned enough. That guy's always telling me no. He's always telling me not to go into the big spellbook. And so the apprentice opens the spellbook and starts casting spells, starts saying things aloud. He conjures up animate brooms to go fetch water, but then they just go, they keep fetching water over and over again. and pretty soon the whole workshop is awash with water
Starting point is 00:10:26 and there's water pouring out everywhere and he's trying to find the part of the spell that can contain what he's just unleashed because unleashing things is easy but containing them is not so easy after you've unleashed them. So I have a friend who I've spoken to informally and really asking why are these AI labs racing
Starting point is 00:10:50 to build AGI given all the risks and he wrote back to me about a lot of the conversations that he's had with the small number of people building AGI at these major labs. And he wrote the following. In the end, a lot of the tech people I'm talking to, when I really grill them on it, they retreat into number one, determinism, number two, the inevitable replacement of biological life with digital life.
Starting point is 00:11:16 And number three, that being a good thing anyways. And he goes on to say that at its core, it's an emotional desire to meet and speak to the most intelligent entity they've ever met and they have some ego-religious intuition that they'll somehow be a part of it. It's thrilling to start an exciting fire. They feel they will die either way, so they prefer to light it and see what happens. So Josh, what comes to mind when you hear that? Yauza. Yeah, that's a whole area which I talk a bit about on the podcast episode, which is part of the initiatory drive, part of the want to be, like, taken down to size is an extinction drive, right? The part of us that longs for
Starting point is 00:12:03 annihilation is the same part of us that longs for mystery, that pushes the boundaries until something triggers a series of consequences that comes back on us. You know, this is very deeply embedded in human beings. The drive for something unpredictable to happen, the drive for something mysterious to present itself. And this is part of why I say that really the entire framework of the AI discussion is religious. You know, listeners may bristle at that, but like what you just read me is a religious quote with an apocalypse narrative embedded within it. And it gets to like, as I say in the in the episode, how long were human beings really able to live in a non-animate world, right? We kind of deanimated the world.
Starting point is 00:12:51 We said, oh, scientific rationalism is it, and, you know, there are no greater forces, and there is nothing around us that's greater than us. And then immediately, in the age of new technology, we said about creating forces that are greater than us, right? And this has to do with human beings. We actually do, this is just my opinion, but we do live in a world of greater forces. greater natural forces, time, the movement of seasons, the movement of ecology. We do live in a world of greater natural forces. And part of us instinctively recognizes that this is our deeper alignment and seeks to create that and animate that and bring that to life no matter if we've
Starting point is 00:13:32 convinced ourselves or not that that is a falsity. One of the things I think about this to ground this for listeners, you know, every day we wake up at the Center for Humane Technology and we ask ourselves, what would it take for this to go well? Like how do we bend the curve of where all this is going and the competitive forces and all of that? And how do we create the right container structures, the right ethics, the right policies, the right technical controls, the right design considerations, the right mindsets, the right paradigms and worldviews that would allow humanity to be creating these powerful technologies in a way that doesn't cause what we have been outlining for 10 years with social media and what we call first contact with AI, you know, a disaster of mistakes.
Starting point is 00:14:15 And one of the things that I really appreciate about your work, Josh, and why I was so excited to have this conversation is I think even with the perfect economic incentives and even with the perfect international agreements, think about the psychology that's driving this. And you've talked about this a lot in your podcast where very much the psychology is building a god, you know, building a super powerful thing. I mean, just to quote Demis Hasabas, who's the co-founder of DeepMind, who said our mission statement at DeepMind, which is the first company whose mission was dedicated
Starting point is 00:14:46 to building artificial general intelligence, the way he said it is solve intelligence and then use intelligence to solve everything else. Because if humanity can solve intelligence, you can solve every scientific problem, every game theoretic problem. You can solve every space, you know, energy problem, technology problem, transportation problem,
Starting point is 00:15:06 environmental problem. And so this is already the psychology of what will it take to build this, and then you have Sam Altman and other sort of joking that we're kind of building a god, Elon Musk joking, we're summoning a demon. I guess what I'm trying to get to is, even if you had the right policies and liability frameworks
Starting point is 00:15:23 and agreements about what we're building, there's a kind of relationship at the human level that we have to building this that I think you're pointing out is problematic, that there's a new way we need to relate to technology as humans, and myths have a powerful role in creating that right relationship. That's one of maybe wisdom or responsibility rather than sort of rushing
Starting point is 00:15:45 to tinker and to create and to deploy as quickly as possible. Yeah, I mean, wouldn't it be amazing? We'd have all the intelligence in all the world. We'd be able to see into everything. We'd be able to understand everything. We'd have this incredible computational ability. And it still wouldn't teach us one thing about how to actually live in a human body and to love our neighbor and to actually get along with each other right so the so wisdom and intelligence as you know are two different things intelligence has to be embodied right like all the computational ability in the cosmos is not going to give us the ability to solve for example hunger or poverty if we don't find it in our hearts to actually want to do that right and you know i relate it to
Starting point is 00:16:38 what we know about the internet already, which is like, does having access to all the information make our lives more livable, more meaningful? There's certainly times when having access to all the information helps, right? Like if I'm doing research on a project and this type of thing, but we've seen what has happened with the younger generation that has access to unprecedented amounts of information. And this is where I would question that mission statement and say, like, having all the intelligence actually doesn't solve anything. Like putting it into Putting it into practice is what solves things. We already know, I mean, and now we're seeing a generation that is raised on information,
Starting point is 00:17:17 and the ability of younger people to actually interact effectively with the world is waning, and this is being shown in numerous studies that say that kids can't even name five species of plant that live in their immediate area. Well, that actually happens to be incredibly important to know the foundations of your local ecology, attention span is waning, emotional intelligence is waning. It's far easier to disconnect. You know, meaningful relationships are difficult, right? And they don't operate on TikTok time spans.
Starting point is 00:17:49 So you're already seeing that. And, you know, my feeling with AI is that this is just going to be increased exponentially. We're going to have access to all of this disembodied intelligence and have absolutely no idea what to do with it. You know, and the human question will still remain the exact same, right? the human question, which is one of slow learning and putting into practice over time and actually taking all this theoretical stuff that I've learned and learning how that applies to my life and what it means to be a better human being through that,
Starting point is 00:18:19 like that's going to become more and more and more challenging the more information we have, not less challenging, more challenging, right? Now does that mean it can't be used for good? I thought, of course it can. Of course it can. And I've seen applications of AI already in my own life. I'm not a Luddite, and I've used AI for research purposes and found it very, very effective. But the overarching ecosystem into which it's going to be thrust is one of attention grab and monetization and polarization and keep eyeballs on this thing at all costs.
Starting point is 00:18:58 And we've already seen the effects of that with social media, and I think we're going to see it. amplified with AI. Which is essentially the veil of Maya, right? That's the Buddhist concept of the veil of illusions. The way we as humans get disconnected from the way reality really is and instead trapped inside of our stories and fears, desires, whatever other projections we have. I think AI risks having humanity get completely lost in a computational veil of illusion. Yeah, absolutely, the veil of illusion.
Starting point is 00:19:29 and, you know, historically, that young, eager, acolyte initiate who really just wants to tinker with things and really wants to get these kind of new technologies out there in various ways in the myths, right? They haven't lived long enough yet. They haven't had enough life experience. They haven't had enough guidance to actually know how to discern reality from illusion, right? For example, like a young programmer might get this insight into this thing that they could build that would be the thing that's going to make them a billion dollars or something like that. They might have this real strong feeling about it.
Starting point is 00:20:07 And that's usually the time like in a traditional system where there would be some system of accountability or some elder that would come along and say, no, you've got to wait a little bit. You're rushing into this too much. You're coming at this in an agitated way, right? You've got to slow this down, or it's going to cause unpredictable havoc in the community. I've been in these discussions a short time, but I've been in long enough to know that as soon as you suggest slowing down, there's people who say, it's preposterous, we can't slow it down. There's no way, like you're saying, like we have to compete with China, we have to do this, we have to do that. For each individual, I think there's a real question here around, what am I rushing this out for, really?
Starting point is 00:20:49 What would happen if we were to slow down? When we're talking about world-altering technologies, there has to be a voice in the discussion that says, this needs time. It's whether we as a species can learn to do the delayed gratification thing. If we can, maybe we have the maturity to make it. So we have to develop intelligence and the wisdom to wield that intelligence at the same time. Another way of saying it is, like, instead of saying first solve intelligence, then use that to solve everything else. It's more like first solve intelligence, and if we don't bind that to wisdom, it'll break everything else.
Starting point is 00:21:24 Yeah, I mean, it's simple, right? These stories are so simple that they get really overlooked. You know, the ancient stories tell us that once we unleash something, it's a little hard to put the genie back in the bottle, as they say. Right? All of this is really simple wisdom. And I said this in a recent talk out in San Francisco. You know, there are people, young people now, who are tinkering with world-altering technology.
Starting point is 00:21:50 and they don't even know how to make a fire. Fire is the oldest technology of all, making fire the old way, the old technology of fire. It's slow and it's steady and it requires patience. And it gives us an introduction, right, to something that can provide warmth and nourishment and can cook our food and can also burn the whole house down if it gets out of control.
Starting point is 00:22:15 How do we work with that? How do we start to slowly and surely understand, like, containment within the process of ideation and new ideas and new technologies and then understand what it means that like the fire needs to be contained a little bit too. I really like that metaphor and to link this very, to some people sounding woo conversation to practical things. I know that one of the architects of the White House's executive order on AI wrote a book first on AI called New Fire and sees AI as a new kind of fire that humanity actually has to care with.
Starting point is 00:22:49 And when I think about that image of sitting there tending to a fire, what do you do with the fire when you first start playing with is you create a container for it. You don't just start tinkering with it and putting sparks near a big forest. You figure out where you want to contain your fire. And you made the point, I think, in that talk
Starting point is 00:23:05 where we were together as a Wisdom 2.0 AI conference in San Francisco that a lot of the people who are building this actually spend a lot of time out in nature tending to fires. Yeah. And there's a way in which that is extremely important. And that fire story, you know, it's the story of Prometheus and the Greek myths. You know, the word Prometheus means forward thinker, right? Prometheus means like to think forward whose mind is going forward. And it's that roving intelligence that always wants to
Starting point is 00:23:34 move forward, that fire of ideation, that fire of creation, you know, which is a beautiful thing. It's propelled human beings in many beautiful directions, right? But what ends up happening to prometheus as he ends up being chained to a rock right he ends up being forced to be stuck in place because if that fire of forward thinking gets out of control eventually the earth reminds us that we are still subject to the laws of nature and to environmental consequences we don't transcend the laws of nature we're still bound by them and i just want to say you know you and i have talked about this you've used the word woo twice now right and uh and coming from the mythic perspective like all this talk on myth. I mean, it's not woo. It's how human beings have understood reality
Starting point is 00:24:19 for thousands upon thousands upon thousands of years, right? And what I tend to say when it's like, oh my gosh, you're talking about AI and you're talking about myth, isn't this kind of woo? What I tend to say is like, I hate to break it to you, but AI is pretty woo. The whole premise of AI and creating roving intelligences, I mean, there's a lot of ideology driving AI. Just the rabid to quest for general intelligence, that's ideologically driven. It's not simply a rationalist exercise. So it may be that we need to expand a little bit into spaces that have been traditionally called woo, quote unquote, to really fully grasp what AI is. That's a shared story around which we unify. And so, of course, the stories that we choose, the myths that we use end up determining
Starting point is 00:25:06 which way large groups of people apply their will, which is sort of why they matter. And then and sort of want to like ground this again. Like, what are the initiations from the path? What does an initiation even look like? I mean, in traditional cultures, it's recognized that there's a certain phase in development where the young person is testing the boundaries and seeing what they can get away with and might be getting kind of grandiose visions of like who they are and their own individual value and importance, you know, above and beyond the importance of.
Starting point is 00:25:42 of relating in community. And at those times when the individual is pushing the boundaries, you know, adolescence tends to be one of those times, there are tried and true ways of both bringing that person down to size, kind of obliterating that drive that might seek to put them above everyone else, and also saying if you walk step by step, then you can learn what it means to embody. great knowledge to hold great knowledge, but you have to do it step by step. So there are initiation rights. I mean, every single culture traditionally that has ever walked the planet as far as anthropologists are concerned have initiation rights of one kind or another. Ours have gotten a little
Starting point is 00:26:28 bit muddied in modernity, but they're still there to some degree, college graduations and this type of thing. But there have historically been rights that have put, you know, the young initiate it through a process in which they're taken down to size, in which they realize that, you know, oh, I'm not the total of everything. I'm just a piece in a larger network, a larger framework. Could you give just quickly for, like, just rattle off a couple examples from history that you think particularly illustrate this? Yeah, I mean, people have probably heard of things like Vision quests. A lot of times these would take the form of communal dances that would last like a really, really long time to the point that people are on the verge of exhaustion and literally
Starting point is 00:27:14 can't get by without the help of others. They would involve fasting. They would involve time spent out in the wilderness. And then, you know, this is something that we see in movies and modern day stories all the time, right? It's present in Star Wars and the process that Luke has to go through before Yoda will, like, actually take him on as a student, right? In generation upon generation of kung fu movies you can see like the young eager acolyte comes to the master and says i'm ready and the master says you know really why don't you sweep the temple for a few months and then we'll see if you're ready and you know yeah there are PhDs involved there are people with great amounts of intelligence involved but that doesn't mean that we've gone through a particular developmental
Starting point is 00:28:01 process like that type of intelligence is only one type of growth and it's been recognized that there's a different type of growth that has to happen. And often, like I was saying before, the first word that the young initiate hears is no, right? No, you can't do everything. The world isn't yours on a plate to do whatever you want with. And this is something that you spoke earlier about the dominion that humans have achieved.
Starting point is 00:28:26 And I totally hear what you're saying. And I also would throw the question in there about whether we've really achieved any type of dominion at all within the natural world because the world is more out of control than it's ever been, and the climate is more unpredictable than it's been, you know, in thousands of years. And Hartman-Rosa says the more we've tried to create a controllable world, the more out-of-control it's gotten, right?
Starting point is 00:28:49 There are larger forces at play that we're just a part of and what we do to the web of life, we do to ourselves. Like this is basic fundamental knowledge, right? It's basic fundamental knowledge. Like the idea that the purpose of human existence is to do whatever we want, get whatever we want, take whatever we want, rise to the top, be a billionaire, fly off into space. This is a very specific strain of human thought that has been really an aberration compared to how human beings have seen the world for most of our 300 plus thousand year history.
Starting point is 00:29:25 We consider it to be totally logical to see the world in terms of what we're permitted to do, what we get to do, hey, I'm free to do this, I'm free to do X, Y, and Z. responsibility has such a hollow ring in the modern ear right because it's not something we really want to think about oh our responsibility towards something it sounds so heavy you know but that responsibility is is actually it's a basic recognition of ecology it can't just be forward at all costs and this really i think underlines the the myth part of the the stories we tell ourselves in order to live the who we must be it is insufficient to just have our external environment and incentives, try to guide us towards a good destination. This is fundamentally about
Starting point is 00:30:11 the us that we are stepping into, a more mature version of ourselves, and it's inadequate to not do that. Yeah, I mean, especially when you're within a cultural mythic narrative that says that the individual wants and needs exist above everything else and that the real value of a human life is in individual genius and individual progress and this type of thing. And what you're, what you guys are saying is exactly, you know, I talk about this a lot in terms of environmental consequences that human beings are facing. Like, you can have all the regulations and all the laws put in place, but if people haven't had a fundamental experience of their connection to ecology and the consequences of ecology, our impacts on ecology, then you can't expect people to change,
Starting point is 00:30:56 right? And this gets to, like, is one class in ethics enough to convince the young coder who's in an environment of, yeah, get it out now and damn the consequences, is one ethics class enough? No, ethics is something that needs to be embodied, you know, ethics is something that needs to be experienced directly and we experience it through the example of others. I'm talking about actually interacting within a web of relationships. And I think for young kids growing up these days, this is one of the reasons why it's extremely important to not have their primary interaction be digital. They need time actually navigating relationships in the world that don't follow the laws of immediate attention span. All right. So this has been fascinating. I want to
Starting point is 00:31:51 close by returning to the Sorcerer's Apprentice. You know, it's a cautionary tale. The brooms are flooding the lair. There's nothing the apprentice can do to stop them. And what I'm curious about is how you would change this story so it concludes in a non-devastating ending that doesn't require the elderly wizard to Russian and then fix everything for the apprentice because you already sort of pointed out we don't really have those kind of elders in modern humanity so what would the story sound like if everything were to go well well I think that there's a few sides to this one is the responsibility that lies with the elder, the responsibility to not leave the apprentice alone with the keys to the lab and figure that it'll be okay. And this is a responsibility, you know, that I feel is on the elders in the
Starting point is 00:32:49 room, the CEOs of these companies and the people who've been studying this stuff long enough to know its implications. That's a responsibility to actually look at what. what good eldership means. You know, if the only climate the young apprentice is growing up in is get it out there, better, faster, bigger, more, that's going to have consequences. And we need more elders who can show an example of what it's like to actually have a long-term vision. And then I think there's a responsibility on the part of the apprentice, too.
Starting point is 00:33:25 And it has to do with, like, really checking motivations, really going deep into what motivations are. Why am I seeking to do this? Where is it taking me? What are the consequences going to be? And the responsibility on the apprentice is to find a good ecosystem in which to grow and thrive. You know, it's hard to say in this day and age, but visions of success might look a whole lot different than they look to us. When we've lived a little bit, visions of success tend to change and we see like, oh, success is in the relationships that I've built. Successes in how I treat other people and show up for other people, right? Success is in creating something that lasts a long time
Starting point is 00:34:09 and isn't just the next immediate attention-grabbing app. So deep reevaluations on the part of the apprentice and deep recognitions of responsibility on the part of the elders, I think, would take this story in a different direction. This is a perennial story of human overreach. It's important to tell and retell these stories, too, so that we have reminders, strong reminders, of the potential consequences of human overreach and what we can do to remedy that. Josh, thank you so much for an amuse-bouche of a conversation.
Starting point is 00:34:45 There are many places in here I wish we could have double-clicked. I know that there are listeners that are like, oh, this area, I don't know if I fully agree with. And I wish we had the time to go through all of them, but let's just consider this the beginning of a conversation. Thank you so much for coming on Your Undivided Attention. Thanks for having me. Great to be here. Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future. Our senior producer is Julia Scott.
Starting point is 00:35:16 Kirsten McMurray and Sarah McRae are our associate producers. Sasha Fegan is our executive producer. Mixing on this episode by Jeff Sudaken, original music and sound design by Ryan and Hayes Holiday. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and much more at humanetech.com. If you liked the podcast, we'd be grateful if you could rate it on Apple Podcast, because it helps other people find the show.
Starting point is 00:35:42 And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
