A Bit of Optimism - Talking to Animals with A.I. ethicist Aza Raskin

Episode Date: November 21, 2023

When did we get so disconnected from the world around us? How can we find our way back? Aza Raskin thinks the answer might lie in humanity's greatest adversary - listening. As co-founder of the Center for Humane Technology and of the Earth Species Project, Aza and his team are using Artificial Intelligence to decode the language of animals, from whales to crows, while remaining dedicated to ensuring the accelerating rise of A.I. remains safe and responsibly handled. This is... A Bit of Optimism. For more on Aza and his work, check out: https://www.earthspecies.org/ and https://www.humanetech.com/

Transcript
Starting point is 00:00:00 What if you could understand animals? I don't mean understanding what your dog is saying to you or what your cat is saying to you. I mean, what if you had a technology with which you could understand any animal? Well, that's exactly what Aza Raskin is doing. He is using AI to create complex language patterns that actually allow him to decode what animals are saying. And the results are astonishing. And the work he's doing is going to revolutionize how we are able to understand the world that animals are living in. But that is just some of what Aza Raskin is doing. He is one of the wunderkinds of Silicon Valley,
Starting point is 00:00:49 a talented engineer who also invented the infinite scroll. And as AI technology advances, he is obsessed with making sure that we advance it ethically, something I think we are all concerned about. This is A Bit of Optimism. I've actually wanted to talk to you for a while, since I heard you give a talk about talking to animals, using AI to talk to animals. It's really refreshing to have a conversation with somebody about AI where the technology didn't just make something go faster, but actually made something possible that was heretofore impossible.
Starting point is 00:01:33 Aza, how are you able to talk to animals? Well, it's first important to say it's not just me. It's an entire project called the Earth Species Project with co-founder Katie Zakarian, our CEO. We work with something like 80 institutions and biology labs. So it's a whole field that's now working on talking to animals. And actually, even saying that is the wrong way to say it, because it's not really about talking to animals.
Starting point is 00:01:59 It's about learning how to listen to the natural world. And what I think most people don't know is actually how much is already known. It turns out dolphins have names that they call each other by, even in the third person. So they will talk about each other when they're not there. One of the major hallmarks of language. One of our partners recently discovered this, this last couple of years, that orangutans have a kind of past tense, that they can talk about things that are not happening now. So we have not here and not now, two of the big hallmarks of language. Off the coast of Norway every year, there's a group
Starting point is 00:02:36 of false killer whales, just sort of like an orca and a set of dolphins. And they each speak their own different way when you listen to them. And every year they come into a super pod and they each speak their own different way when you listen to them. And every year they come in to a super pod and they hunt. And when they do, they speak a third different thing, which is wild, right? Just think about it. Humans have been around speaking vocally, passing down culture for, I know, a hundred thousand years, 300,000 years tops. Whales and dolphins have been doing this for 34 million years, passing down culture for 34 million years. And of course, that which is oldest, you know, correlates with that which is wisest. If something has lasted that long, it probably encodes some deep truth, maybe something we really need to know to like solve all of our biggest problems. And we're at the cusp of being able to learn from these other species. When we think of earth species, I really think of it as a project in using AI
Starting point is 00:03:29 to make the sacred legible, make that thing which is valuable beyond worlds, comprehensible to human systems. Now, I know I didn't answer the question how, but I thought I'd start there because most people like talking to animals. Is that even really a thing? I have one more example if you want to hear it. Yes. Yes.
Starting point is 00:03:47 This is a 1994 study. It's an unpublished study from university of Hawaii that actually one of our collaborators, Dana Reese originally turned us onto and here they taught dolphins two gestures. And the first gesture was do something you've never done before. And if you think about that, that's a very complex concept to be able to share, to articulate, right? It means innovate. I understand like from going to dolphin shows as a kid that the trainer would make a hand motion or make a whistle and the dolphin would do a flip. So what you're saying is the hand motion or the whistle told the dolphin, do something
Starting point is 00:04:22 you've never done before. Correct. Yes, that's exactly right. Which if you think about it, they have to remember everything they've done before, understand the concept of negation, not one of those things, and then invent a whole cloth, something that they hadn't done before in that session. And they'll do it. It took the trainers a lot of fish and patience, I'm sure. But then they taught the dolphins a second gesture, do something together. And they say to two dolphins, do something you've never done before together. Innovate together. And the two dolphins would go down, exchange sonic information, come out and do the same thing
Starting point is 00:04:54 they had never done before at the same time. I was going to say the first part, the cynical bastard inside me could say, well, that could be classic conditioning, you know, where they got the treat when something was new. The whistle means, if I've seen it before, you're not going to get a treat. I get it. They had to negate something, do something new. But that they could coordinate and communicate,
Starting point is 00:05:16 let's do this thing and come out together, that's amazing. Yeah, right? Like, if they're able to do that, and it's not just one dolphin pair, they replicate it over multiple dolphin pairs. Right. Like, it sort of if they're able to do that, and it's not just one dolphin pair, they replicated over multiple dolphin pairs. Like it sort of says they're probably communicating and we should start figuring out how to disprove that. So that's a 1994 study. So at what point, I assume relatively recently, because the amazing thing about AI, I find, is the speed.
Starting point is 00:05:39 And so I'm so curious as to when this community that you're working with, when they started to take advantage of the new technology, and I'm very curious what they discovered recently that, again, was impossible without the technology. community figured out how to do what was essentially impossible, which is if we're going to translate a language that has no Rosetta Stone or examples of translation, like how are you going to do that? The first thing to know is just how hard field research is doing. We've spent the first three years at Earth Species, we as co-founders, going out into the field to Alaska to study the humpbacks, to the Congo rainforest to work with forest elephants to learn from all these biologists. And it turns out it's super hard. So to give an example of how hard this is, belugas have this incredible communication. When they speak, they will include their name and their clan identity and their hellos. But it turns out, like Dr. Valeria
Starting point is 00:06:41 Vergara, who does this research, she has tags that sit on animals that record audio, record video, record motion. She still could only use 3% of her data because she couldn't tell who was speaking. They're overlapping. They're in these groups of 20, 30, 40, 60. They're constantly moving around. They're always communicating. They're always talking. It's family dinner. Exactly. It's family dinner all the time with like dozens of families on top of each other. It's a cafeteria. There we go. That's right. It's family dinner. Exactly. It's family dinner all the time with like dozens of families on top of each other. It's a cafeteria. There we go. That's right.
Starting point is 00:07:07 It's a cafeteria. Here we have the most vocal underwater animal with probably the largest vocabulary. And Western science has been unable to peer into 97% of, say, herd data because the technical problems of listening to multiple animals, separating them out into their own individual tracks, denoising all the background, all of that is really, really, really hard. And so a lot of the original work that we've been doing are building these sort of models that let us do these foundational things. I mean, you know, the first paper that we published in scientific report was the first time we were able to listen to multiple animals speaking at once, separate them out into their own tracks. We published the very first like big benchmarks
Starting point is 00:07:47 for animal language to know whether AI models are getting better. So I just want to like give people that background before saying, you know, we're starting to work with these incredible crows. Yeah. They have a unique culture, like they're genetically identical to all the other crows, but in this culture, they do communal child rearing. It's sort of like a commune or a kibbutz. And they have unique vocabulary that no other crow group has. And they will take adult outsiders in, teach them their vocabulary, and they will become part of their community in raising their children together. And we're working with these incredible researchers that have
Starting point is 00:08:26 tags on the crows. So they like record motion and they record audio and we can see it. There are things that the crows will say to the young before they leave. There will be specific things they're all talking about. It appears just before they come back to the nest. We're really starting to be able to do this translation from motion and behavior into audio and meaning, but it's too early yet to be like, and here we go. Have you been able to discern a conversation, like a warning, or can you go get the kids at three o'clock from school,
Starting point is 00:08:57 or some sort of something that we would relate to? So an example of this, although this is not our work, but we're building on it, is campbell monkeys have alarm calls and a simple suffix. So hawk for them means eagle, crack means leopard, and then hawk-oo means predator that's up, crack-oo means predator that's down. So they definitely refer to things and then even have these very simple, at least so far as we've discovered, syntaxes, morphologies that programmatically change the meaning of their vocabulary. Wow. So they can – it's not just an alert to danger, but it's an alert to danger with position.
Starting point is 00:09:36 Yeah. A type of, a category of. A category of. That's astonishing. Have we yet been able to, Dr. Doolittle, this, have we been able to take the language models and be able to type something into a computer and the computer can then speak to the animals and get the response that we hope to get? That's a good question. So the step that we need to do before we even touch something like that, and of course, it is unclear yet how much of what we can say directly in human language will be meaningful to an animal and back and forth. But one of the things we're doing, one of our research engineers, Jen Yu, he is working on a real-time audio language model that speaks in the language of a kind of bird called a zebra
Starting point is 00:10:20 finch. They're a songbird. They do vocal learning. So they learn their call as young. And here's the weird thing. This is the big plot twist. Before we understand what we're saying, we will be able to anyone in the world who speaks a language you don't understand and you just sort of listen for a while you put your head on your side and you're like oh i don't know what anything means but when this sound comes this other sound comes in this contingency and then you just start to babble you're just like i don't know what i'm saying but i'm just babbling and you're babbling but the other person's like wow what you're saying is so meaningful hmm and you're like i don't know what's going on over there, but when I do stuff, they seem to have a meaningful interaction. We're going to be able to do that. We're doing our first experiment in a controlled setting next year with these
Starting point is 00:11:17 zebra finches that we end up in a two-way conversation before we fully know what's being said. What about these amazing stories of somebody being shipwrecked and dolphins or orcas come and save the person? We've heard these stories. Have we been able to discern that animals recognize when we're there? Are they communicating about us? That is the perennial question. I mean, they obviously can discern that we're there. And the question is, and are they communicating about us? I will just say two experiments that I love. Do you know the mirror test?
Starting point is 00:11:52 Go on. This is a test for self-awareness. So in this test, researchers will paint a dot on an animal in a place they can't see. They'll then give them a mirror. And if the animal looks in the mirror and starts like trying to touch the dot, it shows that the animal has connected the image in the mirror with themselves, that they say, that is me. I am aware of myself. And dolphins do this. Chimps do this. Elephants do this. Although they didn't for the longest time until researchers realized they were just using a small mirror. They needed a big mirror for elephants to see themselves. Who knew?
Starting point is 00:12:26 So it shows you the anthropocentrism. And so if animals are communicating, they may well be communicating about a rich interiority, a sense of self-awareness. And like what is more profound than that? Pretty much all of the myths start with humans being in connection with nature and able to talk with animals. And what happens? It's at the point that humans disconnect are no longer aware really of the world they live in, that we lose our ability to talk to animals. And I think that points at like the fundamental sin of humanity is,
Starting point is 00:13:02 is disconnection first from ourselves, then from each other and the natural world. at like the fundamental sin of humanity is disconnection, first from ourselves, then from each other, and the natural world. Another way of saying that is, if you really zoom out, is like if you look at all of humanity's biggest problems, whether it's an opioid epidemic or climate change or pollution or mental health crisis or the attention economy, these all take the same form. And it's the form of a narrow optimization at the expense of the whole, focusing in on something little and losing everything. And I think that's what those stories talk about. And I think about what earth species
Starting point is 00:13:37 is about. It's fundamentally about making the sacred legible. It's about making that which is valuable beyond words comprehensible to us and our systems so that we may reconnect and be aware of the harm that we may be causing. Where my mind went was we can dehumanize other human beings. In war, both sides will dehumanize the other because it's easier to kill that which we do not perceive like us. We do it in society all the time. We even do it in politics. Our ability to reduce other human beings to a point of view or an opinion or a non-human makes it very easy to attack them and hurt them. warring parties together and you force them to have conversations that have nothing to do with the thing that they're warring over, but to recognize that we have families and that we
Starting point is 00:14:29 have children and that we have friends and that we have lives and then we have ambitions and we have insecurities. And only when we humanize each other, do we find hurting each other much more difficult. This is sort of a ghost set, which is an elephant sounds his trumpet. Okay. That's because it's a call to danger because he's giving us a warning. She's giving us a warning because their baby is near. The snake rattles its tail to tell us to stay away. That's as far as we got. But if we can start to understand that they're making jokes or they're telling each other about the weather or saying, hey, some good food down yonder.
Starting point is 00:15:01 It anthropomorphizes it. It humanizes the animals. What if plants are communicating, you know, that we anthropomorphize and humanize the nature? Then it makes it much more difficult to blindly kill the nature that we share the world with. I think that your work can only be good for the challenges we have in climate change
Starting point is 00:15:23 because we see the world around us as disposable so often. Yeah, that's our hope. We are aware that climate change, species extinctions, these things are driven by complex systems, driven by geopolitics and incentives. But within that, there is a shift to human perspective where when we see the natural world as something that we're part of and not apart from, and that actually, you know, if animals speak, and hence they have consciousness,
Starting point is 00:15:54 it becomes much harder to pretend like we can just blithely ignore them and kill them. Like, I think it really changes our orientation. And I think about that album, The Songs of the Humpback Whale from the 1960s. That's Roger and Katie Payne released this album. And it's the first time that Western society had heard whales sing. And it's haunting. And it's beautiful. It's emotional. That album went multi-platinum. It went on Voyager 1 to represent not just humans, but all of Earth on the Golden Record. Aza, we actually have a sample from that album that was sent on the Voyager. It was Roger Payne's Song of the Humpback Whale.
Starting point is 00:16:39 Here's what it sounds like. It was the reason. Getting played in front of the General UN Assembly, it ended up beginning the movement that caused the ban of deep sea whaling. It's why we have humpbacks today. So I do think these moments of increasing our sphere of care and empathy have massive implications on policy, on our identity, eventually on geopolitics. It really can be the basis for changing systems. I want to change tacks here. Your work in trying to help us define ethics, a code of ethics, for the internet and for AI. Thank you, and I'm sure a Sisyphean-like task.
Starting point is 00:17:39 Let's go back in your own history. You are credited with inventing infinite scroll, which creates the behavior of doom scrolling. I'm sure it was an amazing discovery when you made it. What are your feelings about your own work? Is it a bit of an Oppenheimer moment when he first sees the blast? He's like, ah, shit, I am death destroyer of worlds. Well, it was, I mean, a little bit. I mean, not that I want to compare Infinite Scroll to the invention of like, you know, the atomic bomb.
Starting point is 00:18:11 No, but the realization that you've maybe let something out that perhaps should have been left in the lamp. The really big lesson there for me is that thinking locally, acting locally morally can become globally immoral. Say more about that. What does that mean? So infinite scroll, the reason why I invented it was I was thinking about, and I just want to be self-aware that if I invented it, it's not so big of an invention that it wouldn't
Starting point is 00:18:35 have been invented multiply elsewhere. But what does infinite scroll do? It like solves the problem of if you're scrolling and you haven't found the thing you're looking for, show me more. It like solves the problem of if you're scrolling and you haven't found the thing you're looking for, show me more. Every time I, as a designer, ask you, a user, to do something you don't care about or you didn't need to do, I failed. So it actually makes sense. And I invented it really before a lot of social media had gotten really going.
Starting point is 00:19:02 So this was for like blog posts and search results is when I was imagining it being used. And it is. It's just objectively a better interface. Yes. As long as it's being used in your service. And what I was blind to is that making it better for you locally, when it's you who are driving the thing is very different that when it gets picked up by the machine, which is the attention economy and social media and apps that need to keep you and keep your attention and keep your engagement. And suddenly it weighs, you know, a hundred thousand human lifetimes a day. And so acting locally, morally, I was doing a good thing, but blind to the incentives that would drive the technology's actual adoption is to act globally immorally.
Starting point is 00:19:47 But in your defense, that's impossible for people to imagine. Most inventions, I'd even go so far as to say all inventions are solving a local problem. And it could be something completely silly, like I can't get a good sandwich. I'm going to start a sandwich company and make my own sandwich. All inventions are solving a real problem in the moment. And very few, if any of us are able to understand what will happen months or years down the line as our invention becomes commoditized. How do we even invent a code of ethics? Everybody thinks they're on the side of good solving local problems. Nobody's aware of the global problems that they could produce. The internet being one of them.
Starting point is 00:20:31 Social media being another one. We are circling now the very heart of the hardest problem humanity is going to have to solve. Because the power of technology keeps going up. And it's going up exponentially. keeps going up and it's going up exponentially. An important question to ask is if you're to go back a thousand years, how much damage could a single person do with an accident? And you'd be like, well, not very much. You know, they might be able to like, I don't know, drop a table on someone or like, I don't know, maybe a cannonball misappropriately fired or something.
Starting point is 00:21:02 If I guess they didn't even have cannons back then, but they're not very much damage. But then come to now, how much damage can a single person do? They can do a lot. If you accidentally release a virus out of a biosecurity four-level lab, you could potentially kill millions or hundreds of millions of people. So what does that mean? And the next question next question is, is, is that power to cause harm going to go up or down? Is it's going to go up? It's going to go up at a faster rate or a slower rate. Well, science is progressing. It's going to go up at a faster rate, especially with AI. And that means the ability for us as human beings to make these kinds of mistakes where we invent locally, but we don't project out to how that might cause
Starting point is 00:21:45 harm to everyone, the cost of those mistakes is going up and up and up. And at some point, we will not be able to bear them. But all the incentive structures inside organizations or in society are encouraging us to solve local problems and be willfully blind to the global repercussions. That's exactly right. The only solution, I mean, good luck passing legislation on that. Legislating ethics is asking people to sign onto a treaty that they don't follow. It seems the only solution is an authoritarian regime because if you look at nations like China where the Chinese Communist Party has taken a more heavy-handed approach to this
Starting point is 00:22:24 stuff, and I'm not talking about propaganda. I'm just talking about healthy things. Like, we have trouble getting our kids to get off social media and get off phones and get off addictive devices. In China, the device won't work for the kid after a certain amount of time. Like, it's regulated. When the global challenges show up, they just turn it off or they just restrict it. Other than that, I can't think of how the West is actually going to take care of itself. I think in the way the West is currently set up, that's probably right because we've let ourselves get debased with social media where we can't pay attention. We've become polarized, misinformed. So that makes it very, very hard to coordinate.
Starting point is 00:23:06 misinformed so that makes it very very hard to coordinate oh but i just want to be clear that's not endorsing the china model but it's accurately naming the problem to which i think there are reasonable democratic solutions it just doesn't look like the way our country is set up right now oh isa this is really uncomfortable i didn't expect that i thought we were going to have a nice conversation about how to communicate with animals. And this got very dark very quickly. This is why I have to work on both of these projects because I need something that gives me wonder and awe. And I then need that so that I can show up.
Starting point is 00:23:38 Optimism is in the name of your podcast. People ask me all the time, are you optimistic or are you pessimistic? But actually, people ask me all the time, are you optimistic or are you pessimistic? And I hate that question because to choose whether you're going to be optimistic or pessimistic is, I think, to abdicate responsibility. It's fundamentally about asking the question, how do I see this situation clearly? And then how do I show up and get as many people as possible to show up to do something about it? I need to go back down this horrible rabbit hole. The criticism the West has of the more authoritarian control over the internet is that it is a foundation for propaganda, which is true, which is when the government can control the information and control the narrative, then there's a point of view, which is the government's point of view, which in America in particular, that like sends shivers down our spine. However, in the model that we've embraced,
Starting point is 00:24:35 which is a bit more of a free for all, propaganda is now everywhere. Like we're all propagandists who are able to get our messages out there and have a single point of view that influences people to see the world one way and not ask questions and lose their curiosity. It's not a question, I think, of better or worse anymore. It's kind of like what Churchill said, which is democracy is the worst form of government, but it's better than all the others. It's the same thing here, which is, which is the lesser of the two evils? Because they're both pretty awful. But again, I think that's setting up a false dichotomy, a false choice. There are other ways of doing representative democracy that, for instance, Audrey Tang is pioneering in Taiwan,
Starting point is 00:25:12 where you have groups of people come together in person with access to experts where they deliberate over the course of a couple of days. And it turns out people, when they do this, come up with really good solutions that cross party lines. There's an incredible film, Goodbye Elections, where they ran one of these kinds of things, an advanced democracy session in the most divided state, Michigan, on the most divided topic, COVID, in the year 2020, with a representative sample. So these were, you know, there were people there who were self-identified as part of like conspiracy groups. Well,
Starting point is 00:25:48 they didn't say conspiracy, but they would say that it was, you know, I believe in QAnon, I believe in whatever. There are people from the suburbs there, you know, African-American folks had lost three family members.
Starting point is 00:25:57 Like it was a full set of Michiganders. Yeah. And at the end of the session, you know, it was done over six weeks over zoom, lots of tears, people coming together, and they ended up with COVID policies that had between like 80 and 90% consensus. It turns out people are generally smart if you give them the space to process information
Starting point is 00:26:18 and talk to each other, but we're not doing any of that. So before we jump to say the West is just going to lose to authoritarian top-down control, we can say, hey, there's a middle path here. And that's super excited. We've never really even tried it. Oh, this is so beautiful, which is we're going full circle here, which is it's all about listening. There we go. Yeah, exactly. We have a world organized around talking and social media is all talking and no listening. Nobody's ever changed their mind because of a comment in the comment section of Instagram. And I think what you're talking about is, and there's a beautiful metaphor, which is there's a magic in putting the microphone in the
Starting point is 00:26:55 water to understand what the whales are trying to say to each other with no bias, with not any intention of trying to tell them what to do, but really learn the meaning. And I think this is the core of good listening, which we are crap at. And it sounds like when we get people in a room together to solve a problem and we start them off by listening rather than preaching to each other, we find solutions that over 90% of the people will agree to and it's homegrown, which is the best thing. That's right. We never change when we speak, we change when we listen. Listen is deeply transformative. And I think that's right. I think that's at the heart of all of the work that I do. And it's come very personally, right? Like for me, the moments that I've changed the most is when somebody, normally somebody who's close to me has the courage and
Starting point is 00:27:40 the grace to point out some way that my ego is showing up and causing harm. And in so doing, when they tell me that thing, and if I can actually listen, I change because I don't want to do that thing anymore. And then the other side of like seeing or being able to hear about your shadow is loving yourself more. And if you love yourself more, then you get to give more love. And if you give more love, you get to receive more love. And that's what we all really want, right? Ego is that which blocks us
Starting point is 00:28:07 from having the very thing we desire most. And listening is the only way that we can lessen our ego. Was there a specific thing that happened to you? Because you were on that path. You were in Silicon Valley. You could be on that path to make your billions. Was there a specific thing that happened that made you change directions to do what you do now? really the 2016 election that made me say all of the doubts that I've had about the way the technologies I've worked on. I helped run design for Firefox and build the open web. I really
Starting point is 00:28:55 believed in the values of democratizing voice and the long tail of creativity and seeing how inadequate those values were that it was creating Trump's and Duterte's and Erdogan's. Those were the moments that I'm like, this can't be like, I can no longer push this feeling down inside of me. Instead, I need to orient towards something which isn't just build another app, build another company and get caught by incentives that I cannot escape. Right. It was this realization of we've now crystallized as the three laws of technology. And these are the ones I wish I'd known at the beginning of my career. The first law is when you invent a new technology, you uncover a new species of responsibility. And it's not always obvious what they are, right? We didn't need the right to be forgotten until the internet could remember us forever. And then the second one is if that technology confers power, if it gives you an advantage, it's going to start a race for that power or that advantage.
Starting point is 00:30:08 And then three, if you do not coordinate, then that race will often end in tragedy. The attention economy is the perfect example. And the reason why we knew, really starting in 2013, the direction that social media would take the world is, to quote Charlie Munger, who's Warren Buffett's business partner: if you show me the incentives, I will show you the outcome. Show me the incentives, I'll show you the outcome. If you can name the race everyone is in, then you just have to look at what's at the finish line to know the result. And for social media and the attention economy, it's so obvious, right? If the business model is to get human engagement, that means do whatever it takes to activate the human nervous system. It's a race to the bottom of the brainstem. Of course, we're going to get polarization. Of course,
Starting point is 00:30:58 we're going to turn everyone into propaganda and disinformation machines. Of course, we're going to get mental health crises. Of course, we're going to get mental health crises. Of course, we're going to get backsliding of democracy. And it was so painful to watch because at the beginning, like there are all these incredible stories like social media connects people, it connects small and medium businesses to their customers. And all these things are true. And then people would always get stuck talking about like, well, there are these addiction problems and the misinformation problem and polarization problem. And everyone just seemed constantly blind to like, guys, it's just, it's just the incentives and they'll give you the
Starting point is 00:31:33 ability to predict the future. How did those CEOs react? You know, their business models are based on getting people to stay on their sites for longer. Well, the nice thing is that what we say is very hard to argue against. And there's that Upton Sinclair quote I always come back to, you never depend on a man to see what his salary depends on him not. So there's some amount of that. But a number of the CEOs, Jeff Bezos, for instance, now will point at social media as the reason, one of the core reasons why it's very hard to do anything on climate or any of the world's problems. So I think there's starting to be, besides, you know,
Starting point is 00:32:10 Zuckerberg, a realization of the role social media has had. And then we were trying to paint the picture. Well, then, if we want to know where AI goes, because we will get cancer-solving drugs, we will be able to engineer bacteria that eat microplastics. That's all true. And I want to live in that world. I want to live in the world where we can learn from and listen to animals. And what is the incentive? The incentive is grow your AIs as quickly as possible to increase their magic powers, deploy into the market as quickly as possible for market dominance, and go in a loop. And we know what the outcome of that's going to be in the same way as we know what the outcome of the use of oil is. And I think this analogy is really strong. And it's that what oil is to physical labor,
Starting point is 00:32:59 right? Every barrel of oil is worth, I think, 25,000 hours of physical labor. And so you can take a human being out of the field and replace them with a tractor that works all the time. What oil is to physical labor, AI is to cognitive labor. That is, you know, cognitive labor is when you sit down and write an email, that thinking that's writing the email, that's cognitive labor; working on a science paper, that's cognitive labor. So in the same way that oil and all of the industrial process set off this race, the thing that suffered was the commons. And what is a commons? A commons is just a universal thing we all depend on. Air is a commons. The ocean is a commons. The weather and the climate are a commons.
Starting point is 00:33:40 Our mental health is a commons. Our attention is a commons. These are all things that we depend on. When you set off this race, a race with a new technology that gives the ability to harvest a commons, a thing we depend on that couldn't be harvested before, everyone races in for those profits, and all those things that we depend on come under threat, get harmed. That's what climate change is. Right now, nowhere on earth is it safe to drink rainwater. You go to Antarctica, you open your mouth, the drop that falls in has an unsafe level of forever chemicals, of PFOS. Tell me a little bit about your background. Your dad was obsessed with ergonomics. Your mother worked in palliative care, I believe, right? Is that right? Yeah, that's right. Yeah. Both your parents were sort of obsessed with creating a more comfortable world for people, especially ones that either avoid creating pain or reduce pain if people have it, which is what ergonomics and palliative care both are. How did that shared perspective that your parents had affect you and the work
Starting point is 00:34:46 that you do now? Oh, I so appreciate this question. My mom, Linda, she did hospice palliative care, and she, in a very tangible way, helped people die with dignity. And so that's like one articulation of care. And then my father made the Macintosh project at Apple. If you've ever used click and drag, that was one of his inventions. And it's a very different kind of care, through the systems that surround us. When I think about all the work that I do now, whether it's Earth Species Project and learning to listen to animals, or Center for Humane Technology and aligning technology with humanity's best interests, it's about how we can articulate care at ever greater degrees, or spheres. And I'm so grateful for the complete luck of having been born to the parents that I was born with, because I think those value systems are the ones that I
Starting point is 00:35:49 get to bring out, or I try to bring out, to more of the world. Can you tell me about an early, specific happy childhood memory that really captures this idea? I'm thinking back to a particular day, and my father had his sort of mischievous smile. He's like, Hey, Aza, do you want, do you want to learn something? And so he sat down at a desk with a piece of paper and he said, I'm going to teach you how to prove that some infinities are bigger than other infinities. And I think I was in like fifth grade at the time. And then he did, he did. It's actually a proof that's simple enough
Starting point is 00:36:31 for a fifth grader to follow. And I just remember the feeling of knowing this inextricable truth, that I could prove for myself and to anyone else that, for a concept as hoary as infinity, I knew that there were some that were bigger than others and some that were the same size. And it felt like I was walking around with this secret hidden magic of the world. And it's why I really wanted to become a mathematician and studied abstract
Starting point is 00:37:07 mathematics and why I ended up studying physics and going into dark matter. It's because math and physics are maps of the whys of the universe. Is I think you have become your father, where you are asking us all to join you at the table so that you can show us that some difficult things to understand are more important than other things that are difficult to understand. And the more proof that you give us, like you have become the torchbearer for your father,
Starting point is 00:37:38 I hope that we become the torchbearer for your work too. Thank you. If there's one final thought I would say is, I think my father's work to make the Macintosh, what was that about? That was about fitting a system to us humans, to make it fit us well, to make it ergonomic so we could use it without pain. And I think our collective work is almost the new Macintosh project. And that is, we are living with hyper objects and complex systems. AI as a whole is incredibly hard to understand how it fits into incentive landscapes, challenging to understand and then to geopolitics.
Starting point is 00:38:18 Our job as communicators is the new Macintosh project to make these complex systems fit into our minds so that we can do something about it. I'd even go a step further, which is prior to the Macintosh, you had to understand a computer language to harness the power of the personal computer. And the Macintosh made it a human experience with folders and desktops and clicks and things that were familiar and understandable and deeply, deeply human. And it gave the power to all of us to embrace the personal computer. And I think you don't have to be some mathematician or physicist to understand dark matter, to understand how AI works, but rather we can all use it for greater good and also use it to protect ourselves from it. I really think that change starts at home.
Starting point is 00:39:05 And until we join the movement, that's what it takes. It takes all of us to get involved. Yeah, that's right. Everyone should join this new Macintosh project. How exciting is that? And then we can talk to our cats and dogs as well. That's right. Although I'm in for belugas. Aza, thank you so, so much for joining me. Fascinating, fascinating. And I wish you nothing but Godspeed in your work. Thank you so much, Simon. I really appreciated the humanity of this conversation.
Starting point is 00:39:34 Really interesting. Fantastic. If you enjoyed this podcast and would like to hear more, please subscribe wherever you like to listen to podcasts. And if you'd like even more optimism, check out my website, simonsenic.com, for classes, videos, and more. Until then, take care of yourself. Take care of each other.
