The Rich Roll Podcast - Excellent Advice For Living: Kevin Kelly On Wealth, AI, Optimism, & The Future

Episode Date: May 1, 2023

Here to give us necessary life essentials is a skilled navigator of uncertain times, Kevin Kelly. For those unfamiliar, Kevin is the co-founder of Wired magazine, widely recognized as the bible of the digital age. He is a renowned futurist, author, and public speaker whose insights into the world of technology and its impact on society have been widely sought after and deeply influential. Over the course of his career, Kevin has authored several seminal books, including Out of Control: The New Biology of Machines, Social Systems, and the Economic World and What Technology Wants. He has also been a prolific writer and commentator on a wide range of subjects related to technology, culture, and society, and has been a regular contributor to publications such as The New York Times, The Economist, and Scientific American. Kevin shares a hopeful vision of the future of technology, and how it will continue to transform our lives and our world for the better. We delve into the latest trends in artificial intelligence, virtual reality, and other emerging technologies, exploring their potential to shape the world in ways that we can scarcely imagine. But the center of today’s exchange is Kevin’s latest book, Excellent Advice for Living: Wisdom I Wish I’d Known Earlier. From setting ambitious goals to optimizing generosity and cultivating compassion, it is a must-read gold mine of wisdom on careers, relationships, parenting, finances, and more. My hope is that Kevin’s words brighten your thinking about the future and above all, prepare you for the inevitable changes on the horizon. Show notes + MORE Watch on YouTube Newsletter Sign-Up Today’s Sponsors: Momentous: LiveMomentous.com/richroll Whoop: http://www.whoop.com/ BetterHelp: BetterHelp.com/richroll Express VPN: http://www.expressvpn.com/RICHROLL Peace + Plants, Rich

Transcript
Starting point is 00:00:00 Hey, everybody, welcome to the podcast. Today I'm speaking with Kevin Kelly, a true pioneer in the world of technology and media, here to help us make better sense of our confusing present and to share an admittedly optimistic gaze into the future. In the end, it's only going to be the optimists who are going to shape our culture. So you want to be on the side of the optimists to make your vision possible. Kevin is the co-founder of Wired Magazine, widely recognized as the Bible of the digital age. He's also a renowned futurist, an author, a public speaker, whose insights into the world of technology and its impact on society have been widely sought after and deeply, deeply influential. Over the course of his career, Kevin has authored several seminal books, including Out of Control, The New Biology of Machines, Social Systems and the Economic World,
Starting point is 00:01:16 and What Technology Wants. He has also been a prolific writer and commentator on a wide range of subjects related to technology, culture, and society, and has been a regular contributor to publications including the New York Times, The Economist, and Scientific American. In today's conversation, Kevin shares a hopeful vision of the future of technology and how it will continue to transform our lives and transform our world for the better. In addition, we discuss Kevin's latest book, which is called Excellent Advice for Living Wisdom I Wish I'd Known Earlier. And the book, as predicted, is excellent. But first, let's acknowledge the awesome organizations that make this show possible. We're brought to you today by recovery.com.
Starting point is 00:01:59 I've been in recovery for a long time. It's not hyperbolic to say that I owe everything good in my life to sobriety. And it all began with treatment and experience that I had that quite literally saved my life. And in the many years since, I've in turn helped many suffering addicts and their loved ones find treatment. And with that, I know all too well just how confusing and how overwhelming and how challenging it can be to find the right place and the right level of care, especially because, unfortunately, not all treatment resources adhere to ethical practices. It's a real problem. at recovery.com who created an online support portal designed to guide, to support, and empower you to find the ideal level of care tailored to your personal needs. They've partnered with the best global behavioral health providers to cover the full spectrum of behavioral health disorders,
Starting point is 00:02:59 including substance use disorders, depression, anxiety, eating disorders, gambling addictions, and more. Navigating their site is simple. Search by insurance coverage, location, treatment type, you name it. Plus, you can read reviews from former patients to help you decide. Whether you're a busy exec, a parent of a struggling teen, or battling addiction yourself, I feel you. I empathize with you. I really do. And they have treatment options for you.
Starting point is 00:03:30 Life in recovery is wonderful, and recovery.com is your partner in starting that journey. When you or a loved one need help, go to recovery.com and take the first step towards recovery. To find the best treatment option for you or a loved one, again, go to recovery.com and take the first step towards recovery. To find the best treatment option for you or a loved one, again, go to recovery.com. Okay, this is a powerful exchange about where our new world of technology is heading.
Starting point is 00:03:59 And my hope is that Kevin's words help brighten how we're thinking about a future that is so rapidly evolving and above all, helps to prepare all of us for the inevitable changes on the horizon. So without further ado, please enjoy me and Kevin Kelly. Well, Kevin, it's an absolute delight to meet you. Thank you so much for taking the time to do this, to coming down. I've been looking forward to this for a very long time. And I got to meet you. Thank you so much for taking the time to do this, to coming down. I've been looking forward to this for a very long time.
Starting point is 00:04:32 And I gotta tell you, I'm a little bit nervous. My outline is like 10 miles long. There's just so many things I'd like to talk to you about. I'm sure we'll get to 5% of them today. But I guess my head is just jumbled with too many ideas, but I can't begin without acknowledging this collection of extraordinary books that are on the table right now. If you're watching on YouTube, you can see them.
Starting point is 00:04:55 This series of three volumes called Vanishing Asia, these beautiful coffee table books, chronicling the photographs that you've taken over many decades of your experience in Asia and we were just chatting before the podcast, kind of going through them. And it's really quite a spectacular kind of, I guess, history of not just certain cultures,
Starting point is 00:05:24 but also I think in many ways, and I've heard you talk about this, kind of an important document of disappearing, you know, aspects of these cultures that, you know, are quickly diminishing. And part of your motivation being like, I just wanna capture these on film before they, you know, they become something of a bygone era.
Starting point is 00:05:44 Yeah, so thank you for having me. And it's a real joy to be able to share some of this work and other things that I'm up to, but the Vanishing Asia books that you're talking about have been a 50 year compulsion, I guess really honestly what it was to kind of document these vanishing ceremonies and traditions that i have witnessed and i've had sort of the pleasure of witnessing in some cases because
Starting point is 00:06:11 they're no longer being done at least the way that they were and um i again had the privilege of seeing the world at a moment when it was very easy for someone like me with no money to get to these really remote places that had not changed very much. Now they have, they're kind of on a future trajectory like the rest of us. So this was my passion project to document these things. And we did a Kickstarter program to help make copies of them available. And I'm just really glad that I can share it with other people who might enjoy them as much as I do. So your relationship with Asia goes all the way back
Starting point is 00:06:56 to your youth, right? Like you have cut a very interesting and unique path, sort of, you know, orthogonal to, you know, the traditional notions of what a young person is meant to do in order to be upwardly mobile and kind of, you know, headed off into the wilderness with your backpack to explore the world in a very kind of Jack Kerouac sort of way.
Starting point is 00:07:20 Yeah, it was very, maybe say, unintentional. I didn't have grand plans. I was inspired by the Holworth catalog in high school. It gave me permission to kind of invent my life, to invent your own life, that you had permissions. Because until that moment, I hadn't really met anybody who wasn't following the same progression of things, high school, college, work for a corporation. The idea of being able to do something was really not an alternative.
Starting point is 00:07:56 The hippies were beginning to kind of pioneer one of those. And I said, okay, I think I see other people that I admire who are going a different direction. That means it's possible. And I wound up in Asia without the faintest clue about what it was. I'd never eaten Chinese food, never held chopsticks. It was very parochial at the time. And so it kind of just blew my mind in terms of the possibilities and the otherness and the differences were very, very prominent and it was very welcoming and it was very cheap. And so
Starting point is 00:08:32 I had a home there to explore and that's what I did for my twenties basically. Yeah. So it was an extent, it wasn't just taking a gap year. You were there for back and forth for many years. Yeah, right. I did my gap decade. Well, if they'd had a gap year at that time, if that was a thing or even internship, I would have done that and probably gone back to college. But because there wasn't, I dropped out
Starting point is 00:09:00 and I roamed around Asia. And I say kind of tongue in cheek that after a decade or so, I awarded myself an honorary degree in Asian studies because I felt like, okay, now I know something. Yeah, but those experiences were truly formative in your worldview and also in the advice that you now dispense to younger people about what's important, what you should be thinking about,
Starting point is 00:09:30 what you shouldn't be concerned about in terms of pursuing a meaningful life. Yeah, the way I might reduce it now, all those years of experience and travel is in your 20s, try and spend some time doing something that looks nothing like success. That's kind of crazy, stupid, weird, orthogonal, unprofitable, crazy, maybe dangerous. And that experience is likely as unsuccessful as it might look then to become the touchstone
Starting point is 00:10:08 for your success later on. It will become really, really important to you if you're able to do that. Yeah, I did the opposite and regret it deeply. I wish that I had had such a broadening experience in my youth, especially now in my, you know, mid to late fifties looking back thinking, why didn't I do that I had had such a broadening experience in my youth, especially now in my, you know, mid to late fifties looking back thinking, why didn't I do that, right?
Starting point is 00:10:29 But when you're in that moment and all the social pressure and all the messaging that, you know, you're on the receiving end of is sort of incentivizing you to do the opposite. So you really have to, you know, buck tradition. And there's a lot of parental and social pressure that militates against this. So it's not an easy, it's a courageous act.
Starting point is 00:10:53 In fact, the more kind of, the more successful you are in school, the more there is a kind of outward pressure for you to follow that and take advantage of your kind of, your good scores, your good grades. But I actually, my son and my kids went through the same thing, college prep and all this stuff. And they were on their way to college.
Starting point is 00:11:12 They went to college and that was another story. But I did- Were you trying to talk them out of it? I did. It's like the kids always do the opposite, right? Exactly. So my wife is Chinese and so she's, you know, education all the way.
Starting point is 00:11:26 Yeah. And it's the one thing we disagreed about because I was off to the side saying, you know, you don't really have to go to college. But here's the thing is like, if you have an alternative, if you have a program you want to do, if you have a travel, if you have something you want to work on, make it a program for us and we'll support that. But if you don't have that thing, you have a travel if you have something you want to work on make it a program for us and we'll support that but if you don't have that thing you have to go to college and to my surprise all three went to college and it's like that would not have been what i would have done but they went to
Starting point is 00:11:55 college but after that um our son was doing some things he went to get a job and he said look you need to spend at least a year goofing off and doing nothing. Because for your entire life, you've been getting grades and working hard and all this kind of stuff, and you've graduated. It's like you have to goof off for a while. And so he was thinking about getting an MFA in art, and so he decided to get himself his own MFA, to make a little program where he
Starting point is 00:12:25 did art for a year and then wrote a thesis at the end and made a kind of like a PhD project out of it and gave it to his other art professors at school to read and so he'd awarded himself an MFA which I thought that was exactly what he needed to do yeah and that was so good for him in many many ways it's so hard to see when you're at that age, how important that sort of goofing off time is, because it does feel like squandered time when your peers are kind of escalating up the corporate ladder or what have you.
Starting point is 00:12:59 And there's an inherent tension or conflict between the incentives of our modern developed world, which are pushing us in a certain direction to become productive and, you know, independent financially and, and, and contributory. But ultimately, you know, and I'm sure you would agree and you're a product of this, the most interesting people that have the most to contribute are the people that took that less beaten path and went deep into exploration and spent the time in rumination and experience to come out of it more robust
Starting point is 00:13:35 and with a set of seemingly not connected skills that somehow later in life all congeal to make them kind of truly the only person who can speak to a certain issue or an expert in a field that maybe nobody even thought would be. Right. It's really clear right now that the major engine of wealth, and I would also suggest personal happiness is being able to think different. That's the engine. And when we're all connected 24 hours a day around the world with our little devices,
Starting point is 00:14:13 the true value is being able to think a little differently. That's the source of innovation. That's how you make great things. That's how you make great art. And anything that can help you think differently, including AI, which we'll talk about later i'm sure but travel and other experiences doing having uh reading different books than other people read i mean there's lots of ways to do that but you want to really cultivate
Starting point is 00:14:38 that ability to think differently in a world where everybody's connected together all the time. And so I would argue, yes, zig while everybody's zagging, and try and do something different. And travel is a tremendously efficient and productive and inexpensive way to do that. And taking time off, goofing off is another great way to do that. Sabbatical Sabbath is another great way. So that's the assignment really for most people is to have different ideas, to approach things differently.
Starting point is 00:15:17 You're gonna need help doing that. Yeah, and I think that we frame this backwards in the sense that when I have enough money or when I retire or when I have enough money or when I retire or when I have the luxury of time, then I will indulge that instinct to go see the world. When in truth, and through your experience, it's like when you don't have money
Starting point is 00:15:37 and you have tons of time, that's the time to do it. And we think, oh, we can't afford it. But your example, and I don't know that it's that different today is that there are incredibly cheap ways to do this. You can work when you're there, you can work and then go there and live cheaply. And one of the pieces of advice
Starting point is 00:15:55 that I always give to young people that I wish had been given to me when I was younger was have experiences, live lean so that you can have choices and indulge in your creativity and your curiosity because this idea that at 18 or at 20, you're supposed to know what it is that you're gonna do in the world and lock in on that is absolutely ludicrous. Right, yeah, there's so many things about that.
Starting point is 00:16:24 There was, I think Ralph Potts talked about this story, which might've been in the movie, of this guy who's going to Wall Street and he kind of hates Wall Street, whatever, but he's got to make a lot of money. And he was explaining to his friend that he wanted to work for maybe one or two more years so he could have his money, his fortune, whatever it is.
Starting point is 00:16:43 And then he could buy a motorcycle and drive it across China. And we were just, we laughed. The travelers laughed because you could work at McDonald's for a year or half a year and earn enough money to buy a motorcycle and ride across China. It's not a matter of money. It's like, it's the time. And that was the thing that i got traveling when i was
Starting point is 00:17:06 younger with very little money at all was meeting some people on these tours and stuff who had a lot of money and having a guy um tell me that he envied me because i was taking my time you know i was i was on the hiking in the himalayas right and i was going on and they were on a kind of a forest little thing and very controlled. And it's like, he was saying, I wish I could have been you. I wish I did that when I was young.
Starting point is 00:17:31 I wish I had that kind of time. And here's a rich guy. And I was like, oh, I get it. It's like the rich have money, the wealthy have time. And it's much easier to be wealthy than rich because you can control, you have time that it's much easier to be wealthy than rich because you can control, you have time that you're given.
Starting point is 00:17:47 And so that's the thing I started to aim for was that kind of having the wealth of time and control of my time. I'm pretty sure that that story comes from Ralph Potts' book, Vagabonding, and it's lifted out of the movie, Wall Street. It's Charlie Sheen, talking about what he's gonna do when he makes it big.
Starting point is 00:18:09 Right, right, right. In the movie. Yeah, exactly. It's like, you could do that now. You could do that now. Right, yeah. Well, that's a very common trope in our family. My kids sort of have heard it many times,
Starting point is 00:18:20 but very, every couple of years, I sit them down, I have three kids, and I say, I have a magic wand. I'm going to give you a billion dollars. But only if you tell me what you're going to do with it. What are you going to do with a billion dollars? And they'll go through the kind of lists. Maybe they're imagining.
Starting point is 00:18:38 They're young adults and stuff. I would maybe buy a house or something. And I would go on a trip somewhere and I would have this. And I said, okay, you haven't spent any of your money yet. Because in six months, that will, entirely interest will pay back and you're back with a billion dollars. Now what are you gonna do? Oh, well maybe, I'm making something up.
Starting point is 00:19:00 Maybe I'll start a little shop selling, or I wanna do a little day selling knitwear or I want to do a little daycare center or whatever it is. And it's like, okay, you don't need a billion dollars for that. So this idea, I mean, most people's dreams are not a matter of, they're not gated by money. They're gated by other things. And it's very clear in my own experiences that that dream of wanting to work to have the fortune to do what it is, that's a really convoluted and unnecessary way around
Starting point is 00:19:37 getting what you want your dreams to do. Right, but it's the byproduct of a culture that we live in that is sort of pushing this narrative or incentivizing this notion that extreme wealth is the path to happiness and that material accumulation and comfort and luxury are the keys that will unlock that thing that's missing in your life
Starting point is 00:20:03 and fill that desperate hole in your soul. And it's only through stories of people who have explored that to the very nth degree and have reported back. And despite their reporting back, we still don't believe them. Exactly. That's how powerful this messaging is.
Starting point is 00:20:22 Right, right. I think that it demands some extreme counter-programming in the form of kind of your experience and the experience of others who can report back that, in fact, after you meet your base needs through income, it's not that money isn't important, that it's truly experience and broadening your horizons and finding a way to contribute
Starting point is 00:20:49 and link your life path to some kind of purpose or meaning that extends beyond your ego and kind of self gratifying instincts that you will find the happiness that all those other false promises fail to deliver on. Yeah, I've had, again, the honor and privilege to hang out with some billionaires. And what's remarkable is that they're still asking themselves
Starting point is 00:21:15 what they wanna do when they grow up, right? Which is good. Which is good, right, exactly. But it's just saying that their billions actually haven't really helped them do that alone. In fact, it's added another burden. It's become another job. It's another whole set of things that they have to overcome.
Starting point is 00:21:37 I mean, having a billion dollars is something you overcome. And it's a real issue about thinking about your kids and what impact it has on their kids, which is very, very strange in many ways. And so I've concluded, this is not in my advice book, but this is a piece of advice I have now, which is my advice is if you can all help it, do not earn a billion dollars. Okay, please do me a favor.
Starting point is 00:22:01 Kevin, I'll try to avoid that. Exactly, you'll be much happier. Do not earn a billion dollars. Well, the real calculus is, is the money that you're earning creating freedom for you? Or is it creating a more calcified prison for yourself? Because it can do either of those things. I'm sure there are billionaires
Starting point is 00:22:19 who've been able to figure out how to create freedom out of that for themselves. Maybe not, I don't know. I don't even know that I know any billionaires, but you do. But if your wealth can provide that, then okay. But if it's just creating misery for yourself, then what's the point? Right, and so that level of getting what you need
Starting point is 00:22:41 comes way before a billion dollars, is what I'm saying. Sure, of course. And so at that point of a billion dollars, it is a burden. It is something that really weighs on the people who have it. It's kind of like fame. It's my advice, which is you really don't want to be famous either if you just read any biography about a really famous person. It's another type of imprisonment. And most of the people who are really, really famous,
Starting point is 00:23:06 really regret that that is because they have to deal with it all the time. And it's a real, what's the word again? Hinders them in many ways. And so it's not freedom at all. And it's the same thing. So you really wanna to focus on, this is my favorite piece of advice from the book,
Starting point is 00:23:29 which is don't aim to be the best. Aim to be the only. Right. And that only is where you'll be much more satisfied, happy. You'll probably have enough. And that is the route. The Billions is another person's success. That should not be your success.
Starting point is 00:23:54 It's someone else's movie if you're trying to make a billion dollars. You wanna go, you wanna be the star on your own movie. Sure, but explain that a little bit more because the idea of being the only is an intimidating prospect, right? Like how do you become the only? The only at what?
Starting point is 00:24:13 And I think becoming the only at anything, even if it's the most obscure thing on the planet does require again, back to what we were talking about earlier, kind of being contrarian or cutting against the grain and doing things a little bit differently. And I don't know that everyone is sort of cut out for that. Yeah, so first of all, it is a high bar. It is a very, very high bar.
Starting point is 00:24:38 And the second thing in my experience in both my own life and looking at other people, it will take most of your life to arrive there. There might be the really weird, freakish person who's born and has a clear idea of what they're really great at that nobody else can do. And they go for it. But most of us, it's a long and meandering, winding road with lots of detours and right turns and setbacks and turnarounds and everything else to arrive there and you actually don't ever arrive. You're always on that journey of trying to figure out what it is about yourself that is special and unique.
Starting point is 00:25:18 But it doesn't... And okay, so there is a paralysis I've seen in young people. It's like, I don't know what I'm passionate about. I don't know. And so, I can't really start. I can't keep my 100% until I know what that is. And I've become convinced that the proper way to start is to master something. And in that mastery, that becomes a platform
Starting point is 00:25:47 that you begin to kind of move towards discovering. Passion is a product of action. Exactly. It's not the other way around. Exactly. And so waiting around until you're struck with what you're passionate about as a precursor to action is the way most people think about it.
Starting point is 00:26:01 And that just leads to paralysis like a protracted period of confusion. Exactly. So you almost, and it doesn't matter where you start because that's not where you're gonna be ending. And that's true. Again, if you look any remarkable person that you admire, they didn't start there.
Starting point is 00:26:20 They arrived at there. And the more kind of distinctive, unique, special, and only they are, the more likely they started way away from where they actually discovered what they were good at. And so don't be concerned about where you're starting. As long as you're moving forward in that way of really deliberately trying to get better, you'll arrive in the right direction.
Starting point is 00:26:43 Yeah, I think age and wisdom really gives you clarity on this perspective though. When I look at your life, obviously when you headed out to Asia at a young age, or even when you were working at the Whole Earth Catalog and founding Wired, like none of those experiences could have created clarity that you would be this thought leader
Starting point is 00:27:05 and futurist and pontificator on everything. I don't even know how to qualify what it is that you do, but what you do is very unique and you are a one of one, right? And all of those experiences assembled to produce this individual who has a certain perspective that has value that nobody else has.
Starting point is 00:27:27 And in my own experience, I've done a number of things that have led me to this place, none of which I whiteboarded or predicted or kind of scoped out or set as a goal. They're the byproduct of trying different things and failing and all the like. What I find though, is that it's very difficult to penetrate the mindset
Starting point is 00:27:45 of a 20 something year old person with this. That perspective almost has to be earned. And an example of that is I had Rainn Wilson on the podcast like a year ago, who is, he was an actor on that show, The Office, whatever. He was like, 20s are for fucking around. Don't even worry about it.
Starting point is 00:28:05 What are you guys so stressed out about? You're supposed to be, you're supposed to go out and fail and like, who cares, right? And that, like we shared that video and it went crazy viral and it was a pretty close 50, 50 split in like how people responded to it. Like people saying amen on the one hand and a lot of people being like,
Starting point is 00:28:24 you don't understand my life. Like, how dare you? I can't afford this. That's a very privileged perspective. And you know, I would, and not to be, you know, like I'm sensitive to people's varying, you know, socioeconomic conditions, et cetera. I don't know people's lives, but you know,
Starting point is 00:28:43 I think the wisdom of that still holds true. But I think it's my point being that, et cetera, I don't know people's lives, but I think the wisdom of that still holds true. But I think it's my point being that, for a lot of young people, it's threatening to hear that. It's hard to hear that, to like step into the idea that that might be a possibility is scary. And it does require kind of grappling with certain realities. And so I guess my question is like, is it different now?
Starting point is 00:29:08 I mean, we live in a different time now than when you did it. Is it harder to do that now? Is it still possible? Like, how would you speak to that young person who had kind of, you know, a strong, visceral, negative reaction to that type of advice? So I too am sensitive and I'm thinking not just of the people in this country,
Starting point is 00:29:30 but the people all over the world, in Asia where I spend a lot of time, where this is a very real thing and they have far more constraints on their lives, even than say a typical American in terms of their parents and their expectations about what they do and stuff. So the way I would say that is a first pass is that if it is all possible for you,
Starting point is 00:29:53 the more you can do that, the better. So I would say, yes, there are gonna be people whose lives do not allow them that luxury. And that's unfortunate. And that's something we would like to change that's something that for me what prosperities brings which is that we have more choices uh and you know for most of the people in the villages of the asian countries that i have spent in they had even fewer options than that they they were if they stayed in the village they were going to be the farmer. They didn't have any chance to become the only.
Starting point is 00:30:27 So yes, I think it is privilege in that sense. But what we want is we want to spread that privilege around to more people. But if you have any chance to do that, in some ways you're cheating us by not taking advantage of that. That's why you do art. You do art in part for yourself but also because because you owe it to us for that's that's the deal and the deal meaning that you're alive and you have this chance and you have a genius that nobody else has and if you can share that with us we all benefit so So I would say, yes, it may not be
Starting point is 00:31:05 that everybody can take advantage of it, but it's still true to the extent that it is possible for you, it'll be better for you and for the world if you did. Mm-hmm, yeah, well said. Good advice and that's just one kind of slice out of the greater slices of advice that are coming out of this new book that we've mentioned,
Starting point is 00:31:30 but haven't explained, which is your newest book, "'Excellent Advice for Living." This is sort of, this is a really interesting book because on the one hand it's incredibly simple, but every idea that shows up on every page, the more you consider these very concise phrases, the more profound you realize they are. It's almost a tweet storm, right?
Starting point is 00:31:53 This is like a tweet thread in the form of a book, condensed wisdom over the course of your life, you know, offered up in very digestible form. You can open it up to any page and just consume one thought and think about it for the day. So talk a little bit about like why you decided to write this book and what your kind of intention for it is.
Starting point is 00:32:18 Yeah, so I began the book, not with the idea of making a book, but I have been in the habit of writing down bits of wisdom into a little compact proverb of some sort to help me remember it so I can repeat it to myself so a piece of advice I picked up at Whole Earth from one of the editors, Ann Herbert, was, this was, I don't know, 40 years ago. She said, look, you know, whenever you are invited to do something into the future, like to have a meeting, to go speak somewhere, to have coffee with someone,
Starting point is 00:33:05 to make a presentation, ask yourself, what I do if it was tomorrow morning? And that was like, oh man, that was so useful. That was so, so powerful because that's what I would do is I would get an invitation to do something. I said, that's really great. But wait, wait, wait, wait. Would I wanna do this if it was tomorrow morning? Right. And then-
Starting point is 00:33:21 If it's far enough out on the calendar, like I'll agree to anything. Exactly. But soon enough, it'll be tomorrow morning. You're like, I don't really wanna do this. So this future projection was very, very useful. And so I reduce it to that little thing and I would repeat that to myself.
Starting point is 00:33:38 And there was another piece of advice that I learned from, I don't even remember where, cause these things come and go, but it was if I lost something in my household and I don't even remember where, because these things come and go, but it was if I lost something in my household and I couldn't find it, and then I finally found it, my flashlight, whatever it was, and I would go to put it back, the piece of advice was,
Starting point is 00:33:55 oh, no, no, don't put it back where you found it. Put it back where you first looked for it. Because that first impulse, next time you're going to look for it, that's where I'm going to find it. So I repeat that to myself. And so I decided I was writing these down and then I decided that I should keep doing that for my kids
Starting point is 00:34:14 because I have three kids and our style of parenting was based on my experience, which was I didn't really pay much attention to what my parents said. I paid a lot of attention to what they did. So that was our style, which was, we didn't preach or even give advice to our kids very much. It was all through what we did. But as writing these things down, there were like a lot of them that I actually wished I had known earlier. So I was like, well, it's time.
Starting point is 00:34:47 We should give some of those, write them down and give them to the kids. And so that's what I was starting to do. I was starting to write down things, first for my son, to things that I wished I'd known earlier in these little compact tweetable things. And that's how it began. And I put out 68 of them on my 68th birthday and I shared it with my kids and they loved it. And I then shared it with my greater family and went out from there
Starting point is 00:35:15 and it kind of ricocheted around the internet and did its thing. And I was encouraged to do more of them and I kept doing them on my birthday. And then there was a point in which they were kind of scattered all over the place. And I thought they needed to be in between covers so they could hand it to a young person. And that's the origin. Yeah, it's great. We talked earlier about don't be the best, be the only.
Starting point is 00:35:44 That's certainly one entry in here. And I wrote down a couple that kind of stuck out to me. I mean, they really are like, these just really, they on the surface, like really simple thoughts, right? Don't keep making the same mistakes, try to make new mistakes. Okay, well, what are you actually saying there, right?
Starting point is 00:36:03 Like we should go make mistakes. Like you have permission well, what are you actually saying there? Right, like we should go make mistakes. Like you have permission to fail and you should fail. It's only problematic when you keep doing the same thing over and over and over again, right? Right. I love this other one. Productivity is often a distraction. Don't aim for better ways to get through your tasks
Starting point is 00:36:22 as quickly as possible. Instead aim for better tasks that you never wanna stop doing. Right, and that's something again, I took me a long time to kind of realize that cause you'd read all these productivity books and stuff and getting things done and all that kind of stuff. But no, no, actually what makes me happy
Starting point is 00:36:40 is spending inordinate amount of times, things never getting done so I make one piece of art every day I did that for last year you're sharing that the AI art that you share but I'm but I spend an incredible amount of time it's like I'm not trying to reduce the amount of time I'm trying to increase the amount of time I spend that because I just enjoy it so much right another one is and this gets into kind of segueing into the broader picture of who you are. Over the long term, the future is decided by optimists.
Starting point is 00:37:14 To be an optimist, you don't have to ignore the multitude of problems we create. You just have to imagine how much our ability to solve problems improves. Right. So, you know, this gets at the heart, you know, kind of the core of like who you are as a human being, you know, your many things, but most notably, you know,
Starting point is 00:37:37 sort of the reductionist term that gets associated with you as futurist. I don't know how you feel about that. But, you know, that is the thing, like you have this capacity to communicate and understand the present moment and how it relates to the near and short-term future. And you have the facility to kind of communicate
Starting point is 00:38:03 around that, that is rare and I think instructive. But I think, as we've already demonstrated, it's broader than that because you have lived this very broad life, well-traveled, deeply considered, clearly somebody who has devoted copious amount of time to pondering questions, big and small. And from that, you know, extracting what the short and long-term future will hold
Starting point is 00:38:30 and how to deploy our attention, what we should be worried about, thinking about, what we shouldn't be worried about, et cetera, all of which, you know, is sort of condensed and consolidated in this latest book. But the overarching theme to all of this is this unbridled optimism. And this is something that I personally struggle with, especially in our very current moment
Starting point is 00:38:54 where things seem to be happening quite rapidly. And from my perspective at times, spinning a little bit out of control. So talk me through your perspective of, let's talk about the current moment first before we get into future casting. Like, how are you making sense of what is actually happening right now?
Starting point is 00:39:15 We're right in this moment of chat GPT-4 being introduced to the world and there's a global conversation that's occurring at a very broad level around artificial intelligence, what it means, what it portends, what it's doing for us, what our fears are, et cetera. So, where are you at with all this?
Starting point is 00:39:36 Yeah, so I think you're right to kind of pinpoint the optimism as a pivot. And I think, I would say that I am generally, by temperament, genetically proposed to, you know, to be optimistic. But that actually, I have actually, but also think optimism is something you can learn, particularly as a child. And I have actually deliberately become even more optimistic as I get older.
Starting point is 00:40:09 And where does that come? As I said, there's a natural temperament, but it comes from other places. I don't spend much time trying to predict the future. I am trying to predict the present, just to figure out what exactly is happening, going on right now, and to really look at that. And I think that's half of the present just to figure out what exactly is happening going on right now and to really look at that and and i think that's half of the half of the present is just getting this understanding what's happening right now and um but i but i have noticed over time that um optimistic views tended to be more correct than not.
Starting point is 00:40:47 So the people who... So the more I became interested in the future, the more I would read the past and become interested in history, which I hated in high school. I just had no interest whatsoever. I was turned off by it. I just didn't see it. But as I started to travel more, as I started to have to work in the technology, which was changing so fast, the more interested I became
Starting point is 00:41:07 in history, the more I read it. And now I just mostly read history. And what I get from that, and my own experiences traveling in these undeveloped and developing places, was the acknowledgement of progress. Acknowledging that there actually has indeed been progress, material progress as well as moral progress over time. And that when we, when people talk about this current time and the craziness of it, I have to say, you can only say that if you have no idea of history and how crazy things, how crazy politics were in the US,
Starting point is 00:41:46 say in the 1890s or whatever. It was, we just, it's almost beyond belief. I mean, think about the fact that there was a vice president who shot his political upholstery. It was like, and then went back. Right, like dueling. Yeah, like Hamilton.
Starting point is 00:42:00 It was like, that's like, that's totally, you know, I mean, and we disagreed to the point that we were killing each other during the Civil War. So in that sense, history gives a little bit more perspective to the current problem. That's all I'm saying. It puts it into perspective to say, well, actually, we've had periods like that in the past and what happened. So that sense of history and progress i think informs a lot of how i look at things and um that's a fundamental um orientation so so the little
Starting point is 00:42:36 heuristic that i have that i play in my head which is that if we can create one percent more than we destroy every year, then we can have progress because that 1% can accumulate. That's the genius of compounded interest. If you can have 1% betterment, so 1%, if we can be 1% better than we are bad, that's all we need. And that little tiny bit is kind of invisible in the world. 1% difference in betterment, it means like 49.5% of everything is crap and terrible and maybe
Starting point is 00:43:17 harmful. So it's really hard to see that little difference of 1%. But we can see it in retrospect if we look behind. And I had the privilege of being in a time machine and going back to the behind, going back to where we were and really living there in a feudal time with feudal relationships and feudal technology and very little. And so I know deep down where we've come from and how far we've come. And that sense of like, well, here's what we get.
Starting point is 00:43:48 Yeah, we got some new problems, but man, we also have some new good things. And it's so much easier for us to imagine going forward the problems and all the ways in which it doesn't work. That's entropy. It's harder to imagine the one or two ways things do work. The way things work are much more improbable than the way things break. That's again, that's the rule of entropy. That's the second law of thermodynamics that's never been broken,
Starting point is 00:44:18 which is that the ways in which things cannot work and degrade are far, far vaster than the few ways that it can work. Because things that work are more improbable. And part of what we want to be is we want to be improbable beings. Another way we could say, like, don't be the best, be the only, is be the most improbable person possible. Okay? And so that improbability is biased to things breaking down,
Starting point is 00:44:46 to there being more ways we can imagine things not working and the difficulty in trying to imagine ways that do work. And that comes back again to why we should be optimistic, which is that because it's so hard to make things that work, they really work inadvertently. We have to kind of imagine that they could work and to believe that they could work in order to make them work. So there were a bunch of people who saw the Star Trek communicator
Starting point is 00:45:20 and said, I want to make that. I want that to be real. I believe that that could be real. There are lots of ways in which could not work. And there were many tries, the Newton and others that failed because it was easier to fail than not. That's the general rule. But there were people who really believed that that would work and it'd be good. And so that belief, that imagination of imagining what it is that we want and believing that it's possible is those are the people who make the things.
Starting point is 00:45:51 And so in the end, when we look back, all the good things we have were made by people who believed that they were possible and believed that they could be made. And so going forward, that's going to be the same is that it's, if we want to have a society that really works and is full of technology, we need to imagine an optimistic view of it
Starting point is 00:46:15 and believe that we can actually reach that. I'm completely with you on the belief that we need to have a sense that the world can be better and aim our intentionality and our hard work in that direction. But on the subject of like the world becoming 1% better or kind of looking around and seeing what's happening, is that not a factor of the lens
Starting point is 00:46:43 through which you choose to perceive the world? Because you can look at the Star Trek communicator and then look at the iPhone, et cetera, and celebrate that. Or you can look at the accelerating rate of species extinction and climate degradation and the widening gap between the haves and the have nots and the sort of, to your point about understanding history, looking at the history of past empires
Starting point is 00:47:14 and the rise and fall and the arc of these and to try to identify where the United States might fall on that arc. I think you might agree that we perhaps were on the decline of that. And what does that mean in terms of the global sort of power structure and where we sit in the world and what we should be focused on and whatnot.
Starting point is 00:47:37 So there's a choice of perspective that comes into play here that sort of tempers my ability to know, my, you know, ability to just jump on the optimistic bandwagon. So you're right. And my perspective is decidedly in this respect, not American. Sure. There are 350-
Starting point is 00:47:59 Which is healthy. There are 350 Americans between China and India alone. There's 10 times the number of people. Increasingly, what they think about the world will matter more than what we think about the world. So I look at the world in a kind of a global perspective and I think that's where we're moving to. And that's part of what the US is still struggling with
Starting point is 00:48:16 is this idea that it is a superpower and it doesn't want to be a globalist. I mean, this is sort of like the ultimate insult right now, which is crazy to me because look, people, this is sort of like the ultimate insult right now, which is crazy to me, because look, people, this is where we're headed to. We're headed to a planetary economy. We're going to have planetary governance. That's the direction that we have a planetary machine right now.
Starting point is 00:48:37 So on average, globally, if you do the Obama test, which is you're going gonna be born at some year and a random assignment of sex and gender and place in society, what year do you wanna be born in? There's not a year in the past. That's a better deal than right now.
Starting point is 00:49:00 That's the Steven Pinker kind of notion of this is the best time to be alive. This is the best time. This is the acknowledgement of progress. And so, yes, there are new technologies. The more powerful they are, the more powerful problems they will produce. Right, explain that a little bit.
Starting point is 00:49:18 Like this idea of fracturing the binary between we're either moving towards a utopia or a dystopia. Yeah, so that is, and by the way, almost every single really good science fiction movie about the future is dystopian because they just make greater stories. I mean, the people writing stories
Starting point is 00:49:38 really know how to tell stories and they're mostly about this dystopian world. There's very, very few about a world that you would want to live in on this planet. And that's a problem for us. But that choice of, and then utopias, we just, first of all, I don't think they're a good idea. I don't think we'd want to be happy there and they're impossible. But this idea of no other choice.
Starting point is 00:50:03 And I coined a term called protopia which is this idea that we can have a world that's a little tiny bit better just going back to the little delta incrementally a little bit better we kind of creep towards betterment over time it's not problem free in fact the problems are the propulsion for progress. And that world of inching forward where we have new problems as well as new technologies, that's the protopian world. And I think that is, that's a more achievable destination.
Starting point is 00:50:43 Part of that definition is also driven by human agency and choice, right? That's a more achievable destination. Part of that definition is also driven by human agency and choice, right? Of course. That's the thing. Like we are creating this in real time, right? I wrote a book called The Inevitable and the inevitability that I talk about in the book, which is technological inevitability,
Starting point is 00:50:59 is that things like once you invent, once a civilization anywhere in the galaxy invents electrical wires and things, they'll begin to invent electric motors. And once they have electrical signals, they'll have radio. So it's like there's a sequence of things which will inevitably come up.
Starting point is 00:51:18 Once you have electrical networks, you'll have the internet of some sort sooner or later on your planet. Now, and the ai will come so the ai coming is sort of inevitable but what's not inevitable is the um character of the ais and they're plural there are many types and um who owns them how are they governed is international or national is a commercial or non-commercial? Is it non-profit? Is it open source or closed? There are so many different attributes that we have to decide and can decide and have the choice to decide, and all those choices make a tremendous difference to us.
Starting point is 00:51:56 So the AIs are coming, but the character of the AIs in the system is entirely our choice collectively. So we do have tremendous choices, even in the fact that this stuff at the large scale is inevitable. But that choice can only drive so much of the landscape of consequence in the sense that there will always be, especially with these emergent technologies,
Starting point is 00:52:23 a landscape of unintended consequences. And by their very nature, they are unintended, despite our best efforts to curtail them or to instill them with a certain ethic that we think is going to be the best for humanity. In addition to the unintended negative consequences, there are also unintended benefits. And that's sort of what we see happening right now
Starting point is 00:52:48 with GPT and the image AI generators. And that to me is a thrill, is that lots of things that they're doing, the inventors of them had no idea that they could do. Right, and within 24 hours, there are use cases popping up that nobody thought of. Nobody thought of. In one day. Right, and within 24 hours, there are use cases popping up that nobody thought of. Nobody thought of. In one day. Right, and that's the thing I discovered
Starting point is 00:53:09 in researching the history of technology is that the, first of all, several things. One is simultaneous independent invention is the norm because the idea of the heroic inventors is totally wrong. It's a Hollywood invention. But the idea is that is that um ideas and inventions are networks of things there are many many parts and if someone didn't invent it someone else right behind them would which means that um um also the inventors of them don't know what
Starting point is 00:53:42 they're good for i i tell the story in one of my books about Thomas Edison, who was an inventor of the phonograph, the wax cylinder, recording things. And he, that evening or day or whatever it was, he made a list of all the things he thought this device would do. And the number one thing, which actually he did, and some of it has been recovered, was to record the last words of the dying for prosperity. So this was like a really freaky thing.
Starting point is 00:54:11 It was like, oh, in the future, you could hear Aunt Albert talking. And he made other ideas. And number 10 down at the bottom was you might be able to use it for music. Right. So he had no idea. Yeah, no concept of use case.
Starting point is 00:54:26 And the only way we are gonna figure out these new technologies, particularly as they're complicated is through use, it's through use. We can't think is the term I call thinkism. Thinkism is this idea that we can decide and figure out things and have solutions and solve stuff by thinking about them. But it's action, It's using them.
Starting point is 00:54:45 These technologies are so complicated, they have to be used in action for discover both the plus and minuses. And that's what's happening right now with the AIs is that we've been talking about it for 100 years, but it's not until we actually use them every day that we begin to see what they're good for and bad for. And so this idea, there's unintended negatives,
Starting point is 00:55:09 but they're also unintended positives as well. Right, so within days, as we said, of GPT-4 going online, there's tweet threads about use cases that nobody thought of before. Somebody figured out that you you know, you could automatically have a lawsuit filed against a robocaller, you know, like these amazing, like very, like, you know, specific, small use cases that are kind of amazing, you know, writing out in handwritten
Starting point is 00:55:38 notes, an idea for a website and then chat GPT, just creating that website in minutes or whatever it is. But also we have, you know, Meta launching or sort of, you know, introducing its AI and somehow that immediately finds its way to 4chan when it was supposed to be guardrailed, I guess, within the ecosystem of Meta. Like there are these things that humans do, right? Like whether they're malevolent or just chaos agents, you know, that despite our best efforts to guard against,
Starting point is 00:56:15 find a way, right? And so protecting against that causes concern. And I think the deeper concern perhaps, or the thing that I think myself and other people find alarming is just the pace at which this is happening. It's a dizzying pace, that it's occurring so rapidly
Starting point is 00:56:37 that we don't have time to even get our footing before there's a new announcement and a new leap in technology. And that gives us the sense that this is getting away from us a little bit. Yeah, well, it's not getting away from us, it's barely, I mean, if you get into the actual thing, we can unplug these at any time, we can set back.
Starting point is 00:56:55 There's very few things that are completely irreversible. And that's one of the myths about AI is somehow it's irreversible that it's gonna unleash and then it can turn around and kill us. And that's just Hollywood again. It's a good story. But there's absolutely no evidence whatsoever in any direction that it's on a runaway exponential curve.
Starting point is 00:57:14 In fact, it's the opposite. So the idea would be that exponential curve, we get it going and it's unstoppable, it becomes kind of a superpower. But if you actually look, there is no exponential growth in the intelligence part of it. The exponential growth is in the resources that we consume to make it. So it's actually the inverse of exponential growth, meaning it's requiring more and more compute power to produce a little bit of a gain.
Starting point is 00:57:46 So the gain is not increasing exponentially. The gain is pretty linear, but the amount of resources required to make that gain increases exponentially. So it's actually self-limiting in that sense right now, at this time.
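To make the scaling claim concrete, here is a minimal, purely illustrative Python sketch. The base compute figure and the "each step doubles in cost" rule are assumptions invented for the illustration, not measurements from any real model; the point is only the shape of the two curves: capability climbs linearly while total compute explodes.

```python
# Illustrative sketch only: assumes each additional "capability point" costs
# roughly twice the compute of the previous one. The constants are invented.
def total_compute(points_gained: int, base_flops: float = 1e21) -> float:
    """Total compute to climb `points_gained` steps if each step doubles in cost."""
    return sum(base_flops * (2 ** step) for step in range(points_gained))

if __name__ == "__main__":
    for gain in range(1, 11):
        print(f"capability +{gain:2d} (linear) -> "
              f"{total_compute(gain):.2e} FLOPs total (exponential)")
```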
Starting point is 00:58:13 which is general AI, which we're nowhere near at this point. Right. But we're in this uncanny valley in the way these chatbots speak to us that create the sensation of humanity or some level of consciousness that's truly just an illusion.
Starting point is 00:58:29 So it is unsettling in that sense, I guess. So let me acknowledge it is unsettling, it is moving fast, but it's not out of our control. And then secondly, what we're discovering is something more fundamental, which is not that this thing is getting so much smarter. It's just that we're realizing that many of the things that we held as being highly elevated to achieve turn out to be fairly mechanical.
Starting point is 00:58:54 I can assure most people you're not going to lose your job. You may lose your job description. And the tasks in your job may change. Some of the tasks may go away, but your job is unlikely to go away. So far, I've found only one kind of job, one person, whose job has been lost to AI. Everyone else's is going to change. But we have time if we pay attention to it. What we don't want to do is to prohibit or outlaw or ban these things
Starting point is 00:59:26 because then we don't get to steer. We can only steer these things by using them. And so there is a tendency to want to say, well, I want to stop it, I want to halt it. I don't want this thing to happen. And that means that, no, no, you're not going to get to steer. The people who actually do use it somewhere are going to be the ones steering it, because we can only steer it through using it, finding out how it works. And the idea of general intelligence, artificial general intelligence, I think is, again, a myth. It's not even real. We have no evidence of such a thing, because so far the kinds of intelligence that we make are very, very specific, very, very narrow.
Starting point is 01:00:10 And we have really no idea how our own brains work. We're just projecting all kinds of things about ourselves. And I think one of the greatest things the AIs are going to do for us is to help us become more human and better humans. And I'll give you an example, the simplest one, which is we train these generative AIs
Starting point is 01:00:36 on all the stuff that all humans have written, all the content, all the books, all the writing, all the literature. And it's sort of like the average of humans. Okay? And it turns out that the average human behavior is sort of racist and sexist and mean. And we're outraged. It's like, we won't accept that. It has to be better than us.
Starting point is 01:00:59 Okay, we can put ethical, moral guidance into these because it's just code. That's pretty easy. So we can easily code ethics into these AIs and we can actually make them better than us. But the problem is that we humans don't know what that looks like. Does it mean like you're woke? Is it like being super woke?
Starting point is 01:01:24 Is it something else? What does it look like to behave better than us, or at our best? Where do we get that consensus? Who is the who? Who is the us in this? Who decides who the us is? There are all these questions. But if we can arrive at a consensus, saying this is what the best human or the better human would look like, then we can code that into it, we could make them that way, and that would help us ourselves become better,
Starting point is 01:01:54 because it's like how our children make us better people, okay? We want our children to be better than us, and so we articulate the best behavior, and they can maybe help us change our behavior. So in the effort to actually try and come up with ethics that are consistent, and deep moral guidance that is elevated, we have a chance to become better humans.
Starting point is 01:02:35 in the same way that there is no monolithic AI, there is no monolithic ethic. There is no one singular value set that should drive this. And the ethics or the values that are important to the creators of a particular AI may not very well match up against the value set or ethics of another group. So that's a difficult problem to solve, of course.
Starting point is 01:03:00 And those, whatever is instilled into that AI in terms of values and ethics is gonna dictate results. So that's a sticky wicket for sure. So you may have people favoring one over the other. So it's like the old trolley problem for the self-driving car. I'm gonna use this AI for this and this for this. Well, no, it's like, okay, you have a car.
Starting point is 01:03:22 So should the car prioritize the safety of the driver or the pedestrians? Right. I mean, we get down to thought experiments that are- Those are all philosophical problems. Like trolley problem. But now we have to answer them. We can't just wave our arms and say, we don't answer it.
Starting point is 01:03:35 We actually have to answer it. And so people will say, no, I'm gonna buy the car that prioritize the safety of the driver over the passenger. They'll make an answer. The answer is we're gonna favor the safety of the driver over the passenger. They'll make an answer. The answer is we're going to favor the safety of the passenger over the pedestrians. And you can buy that one because that's our ethics here. So we don't, again, we have given ourselves a pass. The kind of inconsistent ethics and morality that we have when we're driving, because we don't know who we favor,
Starting point is 01:04:06 we don't have that luxury with the AIs. We actually have to make a decision, which will force us to understand that our own ethics are very shallow and inconsistent. So we have to make a better one. And there'll be, as you said, there'll be competing ones. And which ones do people favor?
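To illustrate why writing the choice down as code forces the question, here is a toy Python sketch. The policy names, risk numbers, and maneuver labels are entirely hypothetical and are not any manufacturer's actual logic; the point is that once a priority must be spelled out, the ambiguity we tolerate in human drivers is no longer available.

```python
# Toy illustration: once the priority has to be written down as code, the
# ambiguity we allow ourselves as human drivers is no longer available.
from enum import Enum

class Priority(Enum):
    OCCUPANTS_FIRST = 1
    PEDESTRIANS_FIRST = 2

def choose_maneuver(maneuvers: dict, policy: Priority) -> str:
    """maneuvers maps a name to (risk_to_occupants, risk_to_pedestrians)."""
    if policy is Priority.OCCUPANTS_FIRST:
        return min(maneuvers, key=lambda name: maneuvers[name][0])
    return min(maneuvers, key=lambda name: maneuvers[name][1])

options = {"brake straight": (0.6, 0.1), "swerve left": (0.2, 0.7)}
print(choose_maneuver(options, Priority.PEDESTRIANS_FIRST))  # -> brake straight
print(choose_maneuver(options, Priority.OCCUPANTS_FIRST))    # -> swerve left
```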
Starting point is 01:04:21 Right. And there are regulatory and legal issues that are implied by that, right? Like, are you really enabled to make that decision that your car gets to favor you over the pedestrian? Who gets to make that decision, right? That's a problem. I think the other wrinkle here
Starting point is 01:04:40 is that makes people a little unsettled and maybe you can speak to this, is this notion that the AIs, even as they currently exist, the creators of them can't tell you how the decisions are being arrived at. And I think that that freaks out a lot of people. It does. It does. And if we can't understand that, and we're only at the inception in terms of the power and capacity of what these tools are gonna be able to avail us in the coming years, that's disturbing as we tiptoe towards a more general AI version of what we're now seeing.
Starting point is 01:05:17 Yeah, I think you're right. This sort of unexplainability of the AIs is disturbing. It's interesting that we aren't as disturbed that the humans that we deal with can't explain things either for some reason. We accept that, right? We accept that. That's perfectly fine, but I demand,
Starting point is 01:05:31 this is, again, what I was going to say: we demand our creations to be better than us, okay? But secondly, to that point, there are efforts right now. It's a whole field of AI study called explainable AI, to actually have it explain itself. And what it does is it uses another AI that's built to reach in and try to explain what the first AI is doing. Interesting.
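One common, concrete flavor of the "use a second model to explain the first" idea is a surrogate model: fit a small, human-readable model to imitate a black-box model's predictions, then read the small model. The sketch below is only an illustration of that general technique (it is not how any particular chatbot is explained), and it assumes scikit-learn is available.

```python
# Sketch of a "surrogate" explanation: fit a small, readable model to imitate
# a black-box model, then inspect the small model instead of the big one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A shallow tree we can actually read: a reductive but inspectable account
# of what the larger model tends to do.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The trade-off Rich raises next shows up directly here: the shallow tree is readable precisely because it throws information away.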
Starting point is 01:05:54 And that is of course the genesis of consciousness. But isn't it so complicated that even should that secondary AI succeed at that, the manner in which it would communicate to a human being would be so reductive as to be not necessarily even accurate? Right, and that's exactly the same problem humans have. Why did you do that?
Starting point is 01:06:18 Right, well, we're all clouded by our emotions and our biases and all that kind of stuff. By the way, just wait till we add emotions to this, which is in the next couple of years, because a lot of people think, you can't really have emotions until you have consciousness and awareness and stuff. No, again, emotions is something that's very primitive,
Starting point is 01:06:37 like the other things we're discovering. You don't need very much. Our pets have emotions. We can put lovingness and being loved and all these things into very elementary machines. And they're gonna really spook people because it'd be like her. There'd be people who bond to these things
Starting point is 01:06:58 in an emotional way. Like Megan. Like Megan. And so it's like, it's gonna be very, very complicated. But the thing- And yet the optimism within you remains undaunted. Because they are like our children. It's like, yes, and we are like gods.
Starting point is 01:07:17 Okay, we're gonna make these beings that have free will and choices. And we hope that they surprise us with amazingly good stuff. But the price that we're willing to pay is that they may do something harmful. Okay, that's, you can't really, you can't generate anything really great without that possibility of going the other way.
Starting point is 01:07:44 And so are we willing to pay the price to unleash these kinds of entities that can actually generate new things? And I think we want to minimize that harm. And so how do we do that with children? We train them. We instill them with values. We try to move forward our own values into it.
Starting point is 01:08:07 And so we are going to train them. We don't want to restrain them and say, no, no. The fact that you could do something wrong means that I'm not going to let you make a choice. I'm not going to let you create anything new. And so that's what we're doing with these machines. In order for them to generate new things that would be useful to us with us, we want to train them.
Starting point is 01:08:27 Sure. And so we want to minimize it. We're not gonna eliminate it. I got you. But on this idea that we are the gods, we are unleashing this new technology and we are training it and it is learning. What happens when what it is learning
Starting point is 01:08:43 is how to self-improve itself. And to do that with extreme exponential rapidity to the point that almost instantaneously, it becomes the God and we become the subject. And the notion that we can simply unplug it becomes an impossibility. So I'm with you all the way except for the exponential part. As I said, there's zero evidence of that.
Starting point is 01:09:09 In fact, it's the other way around. It can have thinkism. It can work in its brain, but it doesn't have what it needs, which is what we are gifted with, which is a body to interact with the world and have impact. So yes, it could work in its little mind and go round and round and get a little smarter,
Starting point is 01:09:26 but it's stuck inside a circuit somewhere to actually have impact, to actually take control, to actually have effect on the world. It actually has to be connected to the world in some ways. It has to interact. But isn't it connected to the world by dint of being wired into the internet? That doesn't, what's it gonna do?
Starting point is 01:09:46 I don't know. What couldn't it do? Well, I mean, it's like it's connected right now. Its impact is the fact that other people are reading it. Humans are translating that into the world. They're doing it. They're putting out whatever it is. And so in the sense that it needs us, it still needs us.
Starting point is 01:10:04 The idea that it can be independent is an abstraction. It's a fantasy that we can imagine, but it doesn't have any bearing in the actual real world. And the same thing with exponential: we can imagine exponential growth, but there's no evidence at all of anything being exponential in the real world right now. And to have something that gets smarter and smarter through its own training, to have some effect on the world, it has to do something in the world. Like what's it going to do? What's this brain going to do?
Starting point is 01:10:40 Is it going to take over the drones? Well, how does it take over the drones? What's that mechanism? I mean, it's going to, like, put itself somewhere? I mean, it's all a fantasy, right? I mean, it reminds me of, I think it's, is it Nick Bostrom who has the paper clip thought experiment? It's just, it actually, like, it is a religious fantasy, this belief that the thing could become like our God. It's like the cargo cult. It's like, well, it's gonna solve all our problems.
Starting point is 01:11:15 That's the Ray Kurzweil idea. And then we can solve cancer. Well, the thing about it, that's thinkism. This idea that you could solve cancer by thinking about it, just by reading papers. You could read, you could have the most genius AI in the world right now, and if it read all the papers about cancer
Starting point is 01:11:32 that has been published so far, it would not be able to cure cancer. It still needs to do experiments. There's still stuff we don't know. So Ray and people like him like to think, well, if you have a whole lot of thinking, then it can solve problems.
Starting point is 01:11:47 Thinking is one part of solving a problem, and it's probably not the most important part. This idea that you call dumb smarting, right? That's another problem, which is that these AIs can be really, really intelligent in one area and completely idiotic in another. And I think our frustration is going to be, like we see with chat GPT already,
Starting point is 01:12:10 is like, how can you be so dumb while you're so smart at the same time in another direction? And that's because of the engineering maxim that we can't optimize everything. There's always trade-offs. Every organism alive today, there's no general purpose superior organism. There's no organism that's better than every other organism, because they're all making trade-offs for particular jobs. And so if you're really, really fast,
Starting point is 01:12:36 then you're probably not very nimble. If you're really powerful, you're not going to really be efficient. And so there's just trade-offs. And the same thing in intelligence. There are a lot of dimensions to it; it's not just a single one. And to be really, really good at translating or image generating, you're probably not going to be as good as something else that's made for a different job over here. And so this idea of a general intelligence, it's ignorance of the fact that we have no idea
Starting point is 01:13:08 how our own minds work. We don't have access. We can't explain ourselves. And yet we've come to understand that things can still be useful even though we don't understand them. It's interesting because as somebody who is a techno optimist, I would think that you would anticipate or expect
Starting point is 01:13:29 that general artificial intelligence is an inevitability, but you seem to be saying like, this is not possible or in the event that it arises, it's not gonna come in the form that we fear. I think the picture you wanna have in your head of thousands of different species of AIs, plural, many of them being created to perform certain functions, like say doing mathematical proofs,
Starting point is 01:13:57 which will be amazing to help us do proofs of scientific stuff. Okay, they're gonna be engineered to do that and we'll work with them to do that. But the picture, rather than a kind of superhuman godlike thing, is to think about these as artificial aliens, like Spock. They may even be conscious at some level.
Starting point is 01:14:17 But although, most times consciousness is a liability. We don't want your self-driving car to be conscious, because it's a distraction. You don't wanna be worried about whether it's- That consciousness is a liability. It's a liability. You want it to be like a binary functioning machine.
Starting point is 01:14:32 And yeah, and these things are also not binary. They're gradations or there's little bits of things including consciousness and intelligence. But the way to think of them is like Spock, which is meaning that they can be very, very bright about certain things, but they're not human-like because they're built on different kind of substrate.
Starting point is 01:14:52 And that's their benefit, is that they don't think like us. And that's true of the generative ones we see right now. They can imitate us in a bland way, but that's not useful to us. And we detect it. We are already sensitized to it. It's like, no, that sounds like a GPT.
Starting point is 01:15:09 So I call them the universal personal intern. Right. And it's embarrassing to release the intern's work unchecked. You want to check their work. You want to work with them. They're going to always be available for doing all kinds of things. But it's a cooperation.
Starting point is 01:15:26 It's a partnership. They're co-pilots. They're interns. They're assistants. And we're going to use them like we navigate with a GPS assistant, a navigator. We have an assistant librarian who searches the web for us. Now we have the interns who help us create things. And that's the current state.
Starting point is 01:15:43 But what we're going to is these artificial aliens, which are really smart, but they aren't, if they're really, really smart in some dimension, they're unlikely to be smart in another dimension at the same time because it's a trade-off. Because it's engineering. Like what would be like the, what's the super organism on the planet?
Starting point is 01:16:07 Organism to beat all the other organisms. It's like, that's the nonsensical question. You were saying, what's the intelligence that would be superior to all intelligences? It's a nonsensical question. Right, I mean, we like to think as human beings that we are the most, you know, adapted, advanced, but, you know, are we any more adapted
Starting point is 01:16:30 to our environment than the cockroach? You know, you've spoken about this, right? Like we need to, you know, recalibrate how we think about these things and then apply that mentality to how we're thinking about AI. So in that, you know that kind of strain of thought, would you say that this is a base level,
Starting point is 01:16:49 like emergent life form or intelligence? Like, how are you thinking about that in the more kind of like sci-fi sensibility? Sure, sure. Like, there is this idea that we as human beings, like the caterpillar to the butterfly are here to evolve and that the evolution, you know, will ultimately be, you know,
Starting point is 01:17:14 to transform into this new life form at some point, which will not necessarily be carbon-based and maybe silicon-based. So I wrote a book called, What Technology Wants, which was primarily trying to ask this question of, what are the general directions in the evolution of technology? And just as a spoiler, my view of,
Starting point is 01:17:37 my theory of technology is that it's an extension of the same forces that run through evolution in life. The same self-organizing forces are working through evolution and evolution is basically biological life accelerated that is kind of like attempting to create forms that you could not get to with wet tissue. So there's all this space of all possible things that you need to have a mind help you make.
Starting point is 01:18:08 The mind came from what biology, but can make these things that we could not get to, that the biological evolution would not get to by itself. And so what's the general directions? And one of the direct general directions is that we constantly will specialize, that we make the first life as a general purpose cell. And now in our own bodies,
Starting point is 01:18:29 we have like 52 different specialty cells. We have skeletal cells, heart muscle cells, we've specialized. And that's the general pattern is you have the first camera that did everything. Now you have specialized cameras, high-speed cameras, underwater cameras, infrared cameras, high-speed underwater infrared cameras.
Starting point is 01:18:49 We just kind of go in that direction. And the same thing with AIs. We're going to have a general kind of general purpose thing and then we'll make these specialized versions of them over time. Right, like the general AI as it exists now being a single celled organism as opposed to a nerve cell or a, yeah, exactly. And so, and then the question is,
Starting point is 01:19:12 the big question that I have really no opinion about is whether we humans will speciate. And it's very possible with genetic engineering that there will be people, say the Amish, my friends the Amish, who will decide under no circumstances will I or any of my descendants ever modify our genes. And then you have other people, it's like, yeah, tomorrow.
Starting point is 01:19:36 I'm gonna- Get me into that thing. Take away the Alzheimer's gene from me and all my descendants. The Bryan Johnsons out there who are quantifying themselves to the nth degree.
Starting point is 01:19:54 And so over time we might have two different or more species, we don't know. And then the AIs are, again, as I said, artificial aliens. And we'll make more of them and they'll work alongside of us. And that's, to me, the reason why I'm optimistic is I believe that there are problems that we have right now, both in science and in business or culture, that our own minds may not be able to solve. It's like quantum gravity, whatever it is. Maybe our human minds by themselves
Starting point is 01:20:32 can't solve that, but working with minds that we make, we may be able to solve it. Sophisticated interns. Sophisticated interns and co-pilots. Together, we can figure out some of these things. And so I look at a future that's filled with thousands of different kinds of AIs. Maybe we speciate or not, who knows. And in that world, there's not this sort of one big godlike AI that everybody's kind of afraid of, but instead they're like machines.
Starting point is 01:21:02 We have, we don't have one big machine. We've got a lot of machines. And it's like, how do you feel about machines? Well, which machine do you mean? My dishwasher. The dishwasher or the car or whatever it is. How do you feel about AI? That'd be completely ridiculous.
Starting point is 01:21:15 It's like, well, which AIs are you talking about? So I think that kind of world, that's a protopia. Not utopia, it's a protopia. Yeah. Right, protopia, the product of human agency and innovation with an optimistic bent toward progress and a positive, better world. The pro comes from proceeding forward, from progress, from prototyping, and the pro versus the con.
Starting point is 01:21:48 So there's lots of that. And the idea is that, yeah, it's like now, but a little bit tiny better. Right, a little bit. I think there's a misguided sense that people could get that your perspective on technology is steely and cold, but in fact, it's quite the contrary. I think you have a very profoundly spiritual relationship
Starting point is 01:22:16 to all of this that I think is fascinating that I'd like to explore. And maybe a way into that is something you mentioned earlier, which is the fallacy of the heroic single inventor, this idea that the Einsteins of the world or these singular people that we look to who pioneered certain technologies and kind of putting the lie to that myth
Starting point is 01:22:44 and understanding that there are kind of tectonic plates at play here where if that person hadn't done it, there was somebody right on their heels that could have done it. And the idea that these sorts of ideas are kind of percolating in the collective super consciousness of humanity and maybe even broader than that.
Starting point is 01:23:08 So talk a little bit about that cause I think this is super interesting. Yeah, so there is scientific evidence and academic evidence that this idea that most inventions, the norm is to be invented at the same time independently by a number of people. And when you look at it, it's shocking the numbers that will come up with simultaneous inventions in the past. And of course, that's why we have a patent office today is to kind of adjudicate that.
Starting point is 01:23:34 Because multiple people, even today, are inventing the same things. And of course, the other thing that's happened recently, I'd say the last 150 years is that there's very few single inventors, almost all great things have teams of people necessary at this point. And so- Solo genius idea is, yeah. Solo genius and even the solo villain
Starting point is 01:24:01 is a Hollywood trope. This idea of the person in the cave or on top of the mountain who's got all this technology that works on the first time and it's like they're by themselves. It's like, come on, right here, you've got five guys just trying to keep the IT going for this place. And so that solo thing is just wrong.
Starting point is 01:24:20 We're a very communal thing and we're becoming more mutualistic as we go along as a society. We're much more dependent on each other for everything. And that's my idea of the technium, which is that even technologies require other technologies to live and operate. And that system of all the technologies connected together has an agenda itself, has an impulse or a tendency. That's the technium. And I think this idea, that we are a much more mutualistic society and technologies
Starting point is 01:24:54 are, too, is important for us to understand. And then, I think it works against one of the worries that people have of the rogue villain, the individual who can unleash smallpox or a bioengineered weapon or other things. And it's the fact that these technologies are becoming more complicated that that actually is beyond an individual to do. Again, it's kind of a fantasy idea, but in my research, the power of individuals to do harm has actually not increased through technology, because the technologies constrain that, because they're much more mutualistic and social. You just need a lot more people to get things done.
Starting point is 01:25:38 So that's, again, another reason for optimism. But I do want to say one thing about what we'll call the spiritual component of technology. My understanding of both the origins of life, and going through the unveiling and unrolling of life on this planet and this creation of minds, is, first of all, that it's happened a zillion times throughout the universe. I take it for granted that there are other planetary civilizations and they have something of a similar origin and growth. But the basic trend, the basic arc
Starting point is 01:26:27 that we're going through, that we're following, that we're part of. So we're part of something that began at the Big Bang and is running through us and will go on beyond us,
Starting point is 01:26:36 and technology is the kind of current form of that that we're involved in. And what it does, what it gives us, what technology gives us, this cosmic technology from the Big Bang through us, is increasing choices and possibilities.
Starting point is 01:26:54 I mentioned earlier that, you know, a farmer, until a couple hundred years ago, most people didn't have much of a choice about what they did with their lives. They were constrained by underdevelopment to be farmers, or maybe a farmer's wife and a mother. And those were the only choices that most people
Starting point is 01:27:14 from most of human history have had. But we have discovered this new invention called the scientific method, which unleashed a whole bunch of new possibilities that did not exist before. And we're the beneficiaries of that today. You and I are doing things that 100 years ago nobody would think would be a job, that no one would think you could survive on doing. And in the future there'll be people doing things that we would not believe would be possible today. And I think what that means is, the story I like to tell is to imagine Mozart having been born before anyone had invented the piano or the
Starting point is 01:27:53 symphony or anything like that. Say he was born 2,000 years ago. His musical genius would be totally lost on us. We would never get to share it. And that's a shame to us and to him. And then imagine Van Gogh being born before we invented oil paints, or Hitchcock or Lucas before we invented cinema. So each one of these, our inventions have enabled that genius to be
Starting point is 01:28:22 to flower and to be shared, benefiting both us and them. And that means that today, somewhere in the world, there's a Shakespeare who has been born and she's waiting for us to develop the technologies that would allow her genius to be shared and enjoyed and benefiting us. So we have a moral obligation to keep inventing these things and the moral obligation to get the primary technologies of clean water and education and everything else that will also enable that. And so for me, we're on a grand journey of trying to open up the possibilities to allow that every person born and yet unborn
Starting point is 01:29:06 would have a chance to develop their genius, to become the only, and to share that with us as a benefit. And that's the big story that I think we're about. And then, you know, in that process, when people were making things, they're making something new new and they can seem like they're just worked into the capitalistic consumer business
Starting point is 01:29:30 of making something new that doesn't work. But in fact, they're taking part in this great arc of trying to open up the possibilities of the universe to the people on this planet. Mm-hmm. Yeah, it's a beautiful sentiment. You can't help but think, well, I have two thoughts. You can't help but think of what genius was lost
Starting point is 01:29:51 or squandered because we had not developed the appropriate outlet for the expression of that particular genius over the course of human history. And then secondarily to that, this notion of collective consciousness of almost a hive mind, right? Like we're all participating on some level in the gestalt of forward motion
Starting point is 01:30:19 in the generation of these new technologies without really understanding the context or the broader kind of macro role that we're playing. We're like ants, like, you know, moving along as we're digging our ant hill or whatever, but we have no awareness of, you know, the broader game at play. And yet it is sort of unfolding naturally
Starting point is 01:30:43 as if there was some sort of divine plan at play, or greater intelligence that we're consciously unaware of. Absolutely. And I would like to add one other spiritual dimension to that, maybe coming back to the advice book. And that is that I think at the heart of my advice is the fact that this long arc, this generative thing of increasing possibilities, this self-organizing dynamic, has another attribute, which is a paradox
Starting point is 01:31:14 at its heart. And that is, for whatever reason, the way the universe kind of works is that it's generous at its foundation in terms of producing things, in its abundance and its improbability. And that generosity is captured in this paradox of our own human situation, which is that the more you give away, the more you get, which makes no logical sense whatsoever. But it's so reliable that you can live your life based on that. And that sense of being able to give away, knowing that you'll get it back, is the foundation of all artistic creation
Starting point is 01:31:57 and the best habit to have because you need to produce a lot. You need to give away a lot and keep making things in order to make something great. You have to make a lot of bad stuff. And if you are confident that there's more where that's come from, you can keep giving that away. And that's how you arrive at your understanding of yourself.
Starting point is 01:32:18 It's a generous outpouring, and you can rely on that. You can rely on the fact that people are going to treat you well if you assume the best of them. You can rely on things getting better because we're trusting the future generations. So there's a sense in which a lot of my advice about how to behave is based on this premise that at the heart of this long arc in history of creating more possibilities is a generosity that we can count on. Sure, and I think that's beautifully articulated
Starting point is 01:32:56 and I'm certainly somebody who has experienced that myself and would agree with you wholeheartedly. I think with respect to the piece around innovation and technology, where it becomes sort of problematic or challenging is around the idea of whether these innovations are extractive or regenerative or sustainable. Right?
Starting point is 01:33:27 And we have a long history of producing innovations that seem to benefit us, but ultimately long-term are too extractive to be sustainable and are wreaking havoc on our planet. And it creates this tension between progress and this ticking clock where we are quickly depleting the resources of our planet and not respecting it adequately. And so the question becomes, can we pivot away from extractive technology to at a minimum sustainable technology or perhaps more laudably regenerative
Starting point is 01:34:02 technology? So where does your mind sit around how we make that pivot? Because I think we're living in a world in which the systems that we have created have erected a misalignment of incentives that drives us towards the extractive model. Yeah, this goes back to, I think we have a lot of choice and particularly in the politics of things that is a choice that we can make or not make.
Starting point is 01:34:31 And I think we have not yet made any technology that we can't make greener, more appropriate, but that's a political will choice, that's a choice. So technologically- Political will is a big sticky wicket problem. Technologically, we know what a lot of the solutions are. There are two kinds of problems. In my mind, there's the tractable problems,
Starting point is 01:34:54 problems that we know how to solve, but just have to choose to, and then problems that we have no idea how to solve. Most of these climate ones are in the first category where we know what most of the solutions are. And so that's the will, a political will of choosing to do those. We know that if we electrify the current existing energy system, we can consume exactly the same amount of energy.
Starting point is 01:35:23 But electrify it, we can reach 50% of our climate goals by electrifying all the stoves, electrifying all vehicles, electrifying all heating, electrifying all transportation, electrifying everything. Just that alone will get us halfway there to the current goals. And so we know how to do that. Electric cars, all kinds of heat pumps. And it's the will, the political will to do it.
Starting point is 01:35:54 So I would say that that's a category one problem, which is good because it means that we know how to solve it. Right. We have the technology. We have the technology. We have the solutions. But it almost, it makes it more frustrating that we can't implement those technologies. Like we're in our own way. Right, exactly. Because of the systems
Starting point is 01:36:16 that we ourselves have erected that are preventing us from taking advantage of knowledge that's accessible already. Right. And so one of my bits of advice from the book is that you can't reason someone out of a opinion that they didn't reason themselves into. So the thing about it that we're kind of confronting
Starting point is 01:36:40 is that people, unlike say the AIs, are not just very logical. They're very emotional. They arrive at things for not-logical reasons and they inherit views. They have cultural standards and norms that they absorb even unconsciously. So we're very, very, very complicated
Starting point is 01:37:06 and we have to kind of operate at other levels to change our minds. And people who like me, who like to think, think that if we can change how people think, they'll change their minds, but that doesn't work very well. No, and we have the added problem of people being motivated by their own self-interest.
Starting point is 01:37:30 So again, incentives. Yeah, and that's true, that's human nature. We're gonna, so you have to, yeah, you have to make it work. And that's, we're seeing some change in electric cars. And so electric cars, the reason is, is that they're just better cars. Forget about everything else. They're just superior cars in every way.
Starting point is 01:37:50 And that alone may help them come about and become the norm. Right, but then we have the downstream extractive practices in terms of mining and minerals, et cetera, to create these batteries that, you know, it's sort of like for every new solution, there's a new problem that we have to address. Absolutely. And my Protopian viewpoint acknowledges
Starting point is 01:38:16 that most of the problems we have today are caused by the technologies of the past. And all the problems that we're going to make in the future are going to be made by the technological solutions that we have today. And so you say, well, you know, what's the point? Well, the point is that actually we keep increasing the possibilities and choices that we have.
Starting point is 01:38:39 Okay, that's what we get out of it. So I agree with the technological critics who say that we keep increasing the number of problems. But where I differ is I think the solutions to the problems made by technology are not just personal virtue. They're actually new technologies, which themselves will have new problems. But those are the problems that propel us;
Starting point is 01:39:08 their problems are just opportunities in disguise. Right, I mean, theoretically, I'm in agreement with you. I get tripped up a little bit by the ticking clock of environmental degradation. Like how much time do we actually have to solve these problems before we eclipse a certain point at which the global kind of climate crisis becomes so untenable as to be irreversible.
Starting point is 01:39:36 Like that's, we really are up against that right now. And so there's a certain urgency to this that I think needs to cattle prod us out of the theoretical mindset into the truly practical application mindset. Right, yes. And that unfortunately is going to be a very hand wavy deadline because of our ignorance,
Starting point is 01:40:05 particularly at the planetary scale. One of the things that we discovered, not just through climate, but if you ask any question of the earth, of our society at the level of the planet, the answer is we don't know. And I was involved in a failed attempt to try and do a survey
Starting point is 01:40:22 of all the species on our planet. Because we don't know. We don't know, I would say, to within maybe 50% of what the actual numbers are. We haven't even identified all the living species on this planet. Which is like crazy. And so if you ask any kind of question, the one we know about the most is population. And even that, I think we're 10% off in either direction. And we're constantly revising, even now,
Starting point is 01:40:52 the projections of our own human population. And one of the things I am concerned about is the coming population implosion after we reach our peak. And we don't have agreement on when that is, but it's probably within less than 50 years. And so, and that's the thing we know the most about at the planetary scale. And so our ignorance about our own planet,
Starting point is 01:41:16 what's going on and what's happening right now is phenomenally great. And that's one of the first places that we should be working on to stabilize the climate. Not to mention all of that being exacerbated by a degradation of the global conversation and a sense of decorum and how we problem solve as we move towards
Starting point is 01:41:45 what many consider to be this post-truth world that's being exacerbated by social media algorithms and information silos that are making communication and problem solving more difficult, which is of course another unintended consequence of technological innovation. That's right. And so, yeah, I mean, we're moving,
Starting point is 01:42:09 whether we want to or not, whether we acknowledge or not to becoming a more planetary society. And that disturbs people both on the left and the right. They go crazy over this idea of a global governance. And yet we have a global planetary problem that requires global cooperation. This is not going to happen otherwise. And so that, to me, that's a new phase of our civilization that we're moving into.
Starting point is 01:42:37 This planetary wide level of whatever it is that we're making. We don't even have names for it. And that is truly a frontier for us as a species. That only happens once in a planet's life, when you have this knitting together of a planetary civilization. It only happens for the first time once. And so that's what we're moving into.
Starting point is 01:43:01 And I think we don't have good language, good vocabulary. We don't have good notions. We don't have a good role model. It's truly a frontier. Yeah, but unbridled optimist that you are. Right. Undaunted. It's gonna be fantastic.
Starting point is 01:43:18 It's gonna be amazing, right? Yes. Okay. Because the alternative is what? Right, Okay. But, you know, I don't know. Maybe I'm injecting a little, try to a little realism into this. I don't know.
Starting point is 01:43:31 I don't wanna, I don't wanna like, you know, rain on your parade. I wanna be an optimist. I'm like, Kevin, help me become more optimistic. Not everybody can be an optimist, okay? Because we're in a speeding car, as you mentioned earlier. And in order to turn, you have to have brakes. There have to be
Starting point is 01:43:47 some people who are braking it to be able to turn. So you're the guy saying it's going to be amazing. I'm the engine, and I think we need to have an engine that has to be more powerful than the brakes to keep going forward. So I'm glad that there are people who are trying to brake it, because we need
Starting point is 01:44:04 them, okay? But my role is to keep making the engine go more powerful as we can go forward. I got you. Okay? I got you. All right, so you're talking about the global sort of conversation that we need to have
Starting point is 01:44:18 to solve these big problems. I wanna take that down to the very local. You mentioned the Amish earlier. This is a community of people that you've spent quite a bit of time with. You're sporting an Amish beard. Did the Amish beard come before your immersion in the community?
Starting point is 01:44:37 Yes, before. It did, why is that? I grew a beard right after high school. Just style? The mustache just drove me crazy. I just couldn't stand it. So I shaved it off, and then I discovered, oh, the Amish do that.
Starting point is 01:44:51 Maybe they're my brethren. And that's your thing. But then later on, when I became more interested in technology, I became more interested in the Amish, and I started to visit them. The first time, when I rode my bicycle across the US,
Starting point is 01:45:03 I would visit them, and I had one major question, which is: how do they decide what technologies to use or not? And it's a very interesting conversation, because they have trouble articulating that. It's so culturally embedded that they haven't really thought about it in kind of a curious way. But that was my main question. And I decided that they had some lessons for us outside about our own use of technology.
Starting point is 01:45:36 So that's a, a culture that's a mystery box to most people. We look at it and you know, we're, we're curious. We sort of glance at it like we're glancing at a car accident as we're driving down the freeway without really understanding what's actually going on. But you have spent a lot of time with these people. And what's fascinating is you've extracted certain principles around living that are instructive.
Starting point is 01:46:04 So talk a little bit about that. From the Amish, you mean? So the Amish, okay, so there's a stereotype of the Amish that they don't use technology, which is incorrect. They use technology, but they're very careful in selecting which ones they use. And they're always, not always, but they are gradually changing that mix. And then thirdly,
Starting point is 01:46:28 the mix of what the Amish use is really governed parish by parish, sect by sect, decentralized, so it's not uniform. And the ones that are most liberal, we would say, about their use of technology are the ones in the heartland where the Amish kind of were centered and began. And some of the stricter ones are at the outer edges of that, in upstate New York and Indiana. And so it varies. But the general principle that they use is they have two main criteria for deciding whether to adopt technology in their lives.
Starting point is 01:47:08 The first one is, will this technology help me to strengthen my family? And the goal or the evidence for that is they want to be able to spend breakfast, lunch, and dinner, every meal with their children until they leave. That's their goal. So that means they have one room schoolhouse nearby and the kids come back for lunch. It means that they do business in their farm in the backyard
Starting point is 01:47:35 or they have a little shop in their backyard. That's their ideal. And if they have technologies that help them do that, they'll use it. So I have some old order Mennonite friends with a horse and buggy, bonnets, suspenders, the whole thing, but they have in their barn in the back, they have a CNC milling machine,
Starting point is 01:47:56 computer-controlled milling machine running on electricity from a diesel, and the 14-year-old girl in the bonnet is running the CNC machine, okay? Because it keeps them with their family. And the second criterion is: does it help bind our community together as a community? So the reason why they have a horse and buggy is that the horse can only go 15 miles in any direction.
Starting point is 01:48:20 So all their shopping, doctors, whatever it is, has to happen within 15 miles. So they keep everything in that community. The priority, the real focus, the locus of that decision-making is: is it making our community stronger, more interconnected, more intimate, or is it fracturing it? Exactly. And so when a new technology comes along,
Starting point is 01:48:43 they have Amish early adopters. And there are usually guys and they'll have like cell phones. And they'll say, I know. They're like beta testers. I need a cell phone for my business. And so the Bishop says, okay, Ivan. Interesting. Okay, Ivan, you can have a cell phone,
Starting point is 01:49:00 but you've gotta keep it in the shed. Wait, you can't have it in your house. And you have to solar charge it. And we're going to be watching you and your family to see if this is making you stronger as a family man. And if you're actually more contributing into the community. And if you're not, you have to be willing to give it up. And so Ivan tries it out and um
Starting point is 01:49:28 They say, oh, they discover something about the phone, which is that his wife wants one, because her sister has now moved away to a place in Indiana and they want to keep the family in touch. So basically the Amish are adopting the flip cell phone. Interesting. Because all the evidence so far has been that it strengthens their ability to have communities move apart and live in different areas, because the land gets too expensive in one area. And the family business is better for that
Starting point is 01:50:06 so they can keep it in the backyard. So they're saying yes to the cell phone, the flip cell phone. Now, Ivan has a new smartphone he's wondering about. Right, like that becomes very sort of challenging quickly, because obviously you can go on the internet and perhaps learn more efficient ways of farming that will increase your yields, or other tools that could help make the community stronger.
Starting point is 01:50:35 Or you can go down a Twitter rabbit hole and spend all your time staring at your screen all of a sudden. And the bishop has heard these arguments. And so they have Amish computers now, which are computers that just do spreadsheets. They don't connect to the internet because they discovered that spreadsheets are really handy if you're running a business.
Starting point is 01:51:01 And then some of them are experimenting with going online, where they have, like, parental controls that are public and shared and whatnot, so they can go to certain sites, or they've been using public libraries. So this guy gave me his card for his website, and it's like, he's making barbecue, metal barbecue stuff. I said, an Amish website? Well, I just get it at the library. I go to the library to pick up my mail, whatever it is.
Starting point is 01:51:35 So it's out of our home and it's at the public library. So that's one solution. Right. That's fascinating. And what comes to mind for me in thinking about this is again, another tension, the tension between the solutions to our biggest problems, lying in technological innovations
Starting point is 01:51:55 and really investing in that. And on the other, on the flip side of that, a hearkening back to a simpler time. And when you think about food systems and the impact of factory farming and monocropping and soil degradation, et cetera, we're seeing this emergent movement around regenerative agriculture, right?
Starting point is 01:52:20 Which is in some ways a throwback, right? It is a recognition of a more ancient practice that is more beneficial for the planet. Right, right. That is kind of exciting in terms of how we're rethinking how we feed the planet, et cetera. I'm not sure it's something that can scale to the level of feeding everybody on the planet,
Starting point is 01:52:45 but I can't help but think how cultures like the Amish, the Mennonites, et cetera, could participate and be at the kind of forefront of these types of movements. I feel like they have a lot to contribute in that regard. The Amish are really big in dairy farming because no ordinary English farmer, they call them, wants to bother with this. Too much effort, too much hand labor and the Amish still have a lot of big families and the young kids are instrumental in their workforce and they're still doing dairy.
Starting point is 01:53:16 But I joke with them that I think eventually they're going to accept the robot milker, which is amazing and would enable them to continue expanding their dairy business as the other English farmers give it up one by one, which is happening very fast, because it is very, very labor intensive.
Starting point is 01:53:53 it goes down the rows and rows of lettuces, and there's a camera, camera eyes in all the little rows. And using AI, the tractor tractor we'll call it, can identify individual lettuce seedlings by number. It can recognize it. Oh, this is your 2152A, I was here yesterday and it can give the exact, it can appraise its health, give you the exact amount
Starting point is 01:54:25 of water and nutrients or whatever it needs based on that individual seedling, using GPS and everything. It remembers it, and so it has millions of these that it's tending individually. Wow. That is something the human farmer would love to be able to do, but can't. And yet this precision agriculture machine, which is being driven by AI, can, and that reduces the amount of pesticide, fertilizer, and water needed per plant, because it's just giving exactly what that plant needs. Interesting.
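As a way to picture the per-seedling bookkeeping described here, below is a minimal Python sketch. Every identifier, field name, coordinate, and dosing rule in it is invented for illustration and is not taken from any real precision-agriculture system.

```python
# Toy model of per-plant bookkeeping: each seedling has an ID, a GPS fix,
# and a health score, and the dosing rule gives only what that plant needs.
from dataclasses import dataclass

@dataclass
class Seedling:
    plant_id: str   # e.g. "2152A", the ID the machine recognizes
    lat: float
    lon: float
    health: float   # 0.0 (failing) to 1.0 (thriving), from the camera model

def water_dose_ml(plant: Seedling, base_ml: float = 50.0) -> float:
    """Give stressed plants more water, thriving plants less (made-up rule)."""
    return base_ml * (1.5 - plant.health)

field = [
    Seedling("2152A", 36.6777, -121.6555, health=0.9),
    Seedling("2152B", 36.6777, -121.6554, health=0.4),
]
for plant in field:
    print(f"{plant.plant_id}: {water_dose_ml(plant):.0f} ml of water today")
```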
Starting point is 01:54:57 That's how AI can transform this kind of agricultural revolution that we want. Yeah, wow. And did you say that that's something that the Amish are interested in? I can't imagine. The Amish are not interested in that. The Amish are interested in this milker,
Starting point is 01:55:16 the robotic milker. I see, okay. Because that's their main cash business right now for most Amish farmers, the dairy business, because it is very labor intensive. It's working with animals, which they love, and it needs the kind of hands-on stuff. But the thing about milking cows twice a day, so here's the thing: the cows actually decide when they get milked. That's the beauty of it.
Starting point is 01:55:35 They're not being forced. The cows are happier because they decide when they want to be relieved, and they come in and get milked automatically and move out. That's like, everybody's happy. Cows are happier. There's more milk. And the farmer doesn't have to get up at 5 a.m.
Starting point is 01:55:50 and can go on vacation every once in a while. That's huge. You know what would be better? What's that? If we just all stopped drinking milk. We don't need to. Well, yeah. That's another discussion.
Starting point is 01:56:00 That's another thing, yeah. Switching gears a little bit. You're somebody, and again, going from local now back up to global, you're somebody who spent a tremendous amount of time in China, kind of we're coming full circle here, in Asia and in particular China. You visited China many, many times.
Starting point is 01:56:19 My wife is Chinese. Your wife is Chinese. China is very interesting right now, what's happening there. And I feel like it's a culture and a place that the Western world doesn't fully understand. So help me understand what we need to know about China. Maybe where our thinking is sideways on this,
Starting point is 01:56:45 what we should be focused on, what we shouldn't be worried about, et cetera. Wow. I mean, it's a big question, obviously. That's a big question. But like, you know, China is, you know, is- What should we think about the US? So first of all, I would say several things.
Starting point is 01:57:01 One is I would have had a more confident answer just three years ago before COVID when I was going, living there constantly. I felt I had a pulse on the country and I felt that right now I feel blind because I haven't been there and things are changing very fast. Since before COVID.
Starting point is 01:57:19 Before COVID. I was there right before COVID, my last time. And so something has shifted. And I don't have a good sense of what that is right now. So I would kind of preface with that, that I feel less confident about it. The thing I would say,
Starting point is 01:57:37 and it's still true, is that almost anything you can say about China is true somewhere in China. Right? I mean, it is so vast. There is more diversity in China than within the U.S. between California and Maine. I mean, it's really vast. But one of the things that people don't appreciate
Starting point is 01:57:53 that I will mention is that part of the genius and greatness of the U.S. is that it's built on an immigrant experience: people coming from all over the world, mixing up and mashing up, that kind of hybrid vigor produced by having people from many different backgrounds try to contribute and make and be unleashed. And that is happening in China, and it has been happening, but the immigration is all internal. So you have people from very disparate places,
Starting point is 01:58:25 from Yunnan, from Qinghai, from Guilin, coming together, and they speak mutually indecipherable languages. Except they have this common language, not English, but Mandarin. But at home, their languages and backgrounds and traditions are completely foreign to each other. And so they're all coming into cities, the young people, mixing it up and having this immigrant hybrid vigor
Starting point is 01:58:52 that we have experienced in the US. And that was really what was going on for a very long time, I mean, for the past 10 years. And that has been really very, very productive and tremendously energetic. And you have cities like Shenzhen, where most of the stuff that we're using for electronics is made.
Starting point is 01:59:11 Shenzhen in the 80s, okay, so that's maybe 40 years ago, was a fishing village. And now it's a city bigger than New York City. And it means that, of every single person in that city, nobody was born there. None of the people in Shenzhen,
Starting point is 01:59:31 13 million, 15 million people, were born in Shenzhen. They're all immigrants. They all came from somewhere else. And it's the youngest city in the world because everybody's young there. So it's the youngest, hippest city in the world. They built a brand new opera house and library. It's just a brand new city, the size of New York, the scale of New York, that's brand new. So no matter what happens in China politically,
Starting point is 01:59:59 there is a momentum and a hustle and an ambition that I don't think is going to be squashed, no matter what happens, whether they overthrow things, I don't know. But I'm just saying that there is a huge desire, a huge collective moving forward, that's not going to be stopped. And I don't know where it's going, but I'm saying the pressure, the volume, the intensity of that is hard to appreciate, but it's not going to be stopped by whoever's president or whatever party's in power. It can certainly be diminished or modified, whatever,
Starting point is 02:00:44 but it's really a billion people on the march. And we have to kind of acknowledge that. And so I think that's one thing I would say. The other thing is I asked a lot of the young people in China constantly who their heroes are, what their dream was besides getting rich. And the answer was zero, zero. No heroes and no dreams of what they're going to do. So they're racing at a thousand miles an hour forward,
Starting point is 02:01:16 but they don't really know where they're going. That's very strange. What do you make of that? So I would go into dorm rooms, which in the U.S. would at least have posters of things, something that they like, they're hoping for, you know, bands or something cool, whatever, something that would give some idea of what their interests were. There are none of those in China. It means that they're ripe for a new religion. There's no holy scripture.
Starting point is 02:01:51 There's no constitution. There are no sacred texts. There's nothing to guide them. So that means, one, there's a possibility that some weird thing, a belief thing, could go through, that many people would follow. It means that I think they're hungry for a vision of where they wanna go.
Starting point is 02:02:09 There's been a whole thing of the China dream now, trying to get it going, but I don't know what it is, and no one else does either. So I would say that they're ripe for a belief in something bigger than themselves, but it hasn't been articulated yet.
Starting point is 02:02:27 Right, and is it your sense that that belief, when it arises, would sort of challenge the governmental structure or create a situation that pits the people against... That's, we'd have to make a bunch of different scenarios, and that could certainly be one of them. Here's one thing I would say about that. I would ask these young people at the big banquets and stuff,
Starting point is 02:03:00 maybe they'd be drinking a lot, so there was some honesty. So, like, what's one thing that you all agree on here? What's one belief? What's one thing you could all agree on that you want? And they said, almost in unison, stability. The Cultural Revolution and the insanity of that is still close; their parents went through it.
Starting point is 02:03:28 And all kinds of stories, all different ways. That is so vivid. It's like, we'll take almost anything, but we don't want another revolution. Right, give me a job, something stable. They don't want a revolution. So they're not having a revolution. Right, no revolution.
Starting point is 02:03:51 No revolution. Yeah, fascinating. Yes, it's so interesting. And I appreciate what you had to say about the complexity and diversity of this, you know, massive landscape of a billion people. But I can't help but think of the implications of a talent pool, 1 billion people, you know,
Starting point is 02:04:15 deep in a culture in which they've already, you know, outpaced the rest of the world in terms of manufacturing quality, of course, and, you know, what comes next with that and where we're gonna be in 10, 20, 30 years. Yeah, so they have great power and they don't yet have a great dream. I mean, yeah, so anyway, I mean,
Starting point is 02:04:34 that's true for many countries, but China's moving so fast and has such a force and a momentum that could be dangerous or it could be wonderful. When you cast your gaze forward as a futurist with finger quotes, what does the world look like 20 years from now, 10 years from now? Do you have a sense of that?
Starting point is 02:04:59 Like this is your jam, Kevin. Yeah, so I would say several things. I would say more things are gonna stay the same than change. So I think 95% of the world in 20 years from now will look like today. And I think most of the changes are not gonna be in the physical world.
Starting point is 02:05:23 That revolution has already kind of mostly been completed. It'll be much more the intangible thing of how we understand who we are, like this AI stuff changing our beliefs about ourselves and how we connect with each other. So I think it'll be kind of more in the social realm. I think the thing after smartphones is smart glasses, but I've been saying that for way too long.
Starting point is 02:05:45 Right, I mean, you were a big VR guy and you- Well, in the 80s. Yeah, so your timeline on that. Yeah, exactly, I've been wrong for so long. You've been right about a lot of stuff. I would say you were wrong about that. Yeah, right. So take that, I'm still wrong.
Starting point is 02:06:03 And even lately with Meta's big bet on VR and AR and the Metaverse and all of that. And then as soon as ChatGPT hit, like nobody cared about it. Exactly. So I've been wrong before on that. But that's actually, I mean, that's still my answer about what comes after the smartphone.
Starting point is 02:06:19 And I think that, I'm hoping that most gas cars have been replaced, and I'm hoping that we have electrified at least the developed countries by then. And I hope that, if we're lucky, we may have the very first glimmers of fusion, synthetic solar, which is what it is. And other things I'm hoping: again, the general trend to less violence in the world means that there are fewer conflicts overall on average, and that continues in that direction. And then, you know, the wild card of China. But there's always India, which is going to exceed China in number of people. And what we're
Starting point is 02:07:13 seeing in India is the diaspora of India is going to be a fundamental thing. As we can see, even in tech companies in the U.S., the percentage of them being led by Indian Americans is phenomenal. And that may be one of the great exports of India: technical people around the world. Yeah. And so- I think there's a huge export of Indian culture as well. Yeah, exactly. Through movies, television, music.
Starting point is 02:07:39 And we saw it at the Oscars this year. And I think that's gonna continue. Yeah, the three R's, RRR. Yeah, RRR and the song. It's just amazing. I watch these just to understand. Or 3 Idiots, if you haven't seen that, it's a must see, one of the biggest of all the Hindi Bollywood movies.
Starting point is 02:07:58 So I would say that would be another thing. I would say maybe more of an influence of India on the world stage. What's a pie in the sky idea though that you're playing around with? Like all of those predictions are fairly grounded, I think, but what's a more harebrained thought that might've occurred to you
Starting point is 02:08:16 where you think things might go that- Well, this is, again, this is maybe a hope. I'm hoping that we really do have lab grown meat available, animal cell based. Cellular, yeah. Cellular, whatever, clean meat. They keep changing the terminology. Whatever it is, you know what it is.
Starting point is 02:08:38 I'm hoping that that is commercially available in many, many different varieties by then as someone who doesn't eat mammals. And so I'm really looking forward to that. Yeah, I mean, I think that's an inevitability. It's just a cost thing right now. The technology exists. It's commercially available in restaurants in Singapore
Starting point is 02:08:58 in a limited way. It's too expensive, but the technology, like they're continuing to iterate on that pretty rapidly. And it's just a matter of scale, I think. Well, in theory, but to me, that's a pie in the sky because I'm interested in not just replicating the existing animals, but actually making up whole new super meats.
Starting point is 02:09:17 Out of extinct animals? Is that what you're saying? Yeah, right, let's eat the mammoth. This goes back to your- Mammoth burgers. Like your species project, right? Like, yeah, you wanna have a wooly mammoth burger? Yeah, why not? Oh, man.
Starting point is 02:09:31 I'll stick to my plants. I do think it is interesting what's happening in that space. And there is a consumer acclimation phase that we have to go through because people are, I think they have a certain reaction to that. Yeah, yeah. But I think over time, that'll go away,
Starting point is 02:09:54 but you know, right now. Yeah, well, it has to be, I'm ready for it and I'll pay a little bit extra for it. So I'll be one of the first customers. And I have been trying them in the Silicon Valley ones. Have you tasted it? I've tasted several, but not actually the beef or the mammals. I've tasted fish and cheese, milk products, but not the actual meat.
Starting point is 02:10:21 And hamburger is easy to do because it doesn't have the structure, but to make a steak, to mix those two, that they really haven't been able to do so far, but they're working on it. And I'm looking forward to it. A lot of money and a lot of smart people are on that problem. What are the things on that note
Starting point is 02:10:38 that you think we're gonna look back on, I don't know, 50 years from now, a hundred years from now and just cringe? Yeah, maybe eating meat might be one of them. I have another idea, which is kind of maybe trivial, but I thought it was very possible, which is that it may be embarrassing for us who have names assigned by our parents.
Starting point is 02:11:00 Yes, I saw the New York Times article where you had a list of these things and that one definitely popped out to me. It's like arranged marriages. It's like, yeah, of course you're gonna choose your own name when you're 16 or whatever it is. I wish we all did that. And the idea that you have a name assigned by your parents,
Starting point is 02:11:22 I think is just gonna be embarrassing. Right, that's very interesting. I mean, I feel like that's a sort of progression or evolution beyond the gender conversation that we're having right now, like a natural extension of that. You added to that wrapping food in plastic, that's a no brainer.
Starting point is 02:11:42 Getting the summer off from school. Yeah. We're just gonna be educated around the clock? Why not? I went to this one school where the teachers were not allowed to teach you anything except how to learn. So yeah, I think that's, well, you should be taking your vacations when you want to,
Starting point is 02:11:59 rather than just in the summer. So it doesn't mean you have to go to school all year. It just means that you don't have to take all vacations together. Well, I think these technologies are so powerful and our education system is so lagging in terms of acknowledging the power and the capacity for these technologies to revolutionize
Starting point is 02:12:19 how we learn and what we learn. Like we're learning the wrong things. We're not learning the things we should be learning. And we're not training young people to leverage the best of what technology has to offer in order to really kind of nourish their educations. Exactly, and I have one word, YouTube. YouTube is this-
Starting point is 02:12:41 That can go both ways. It's phenomenally underappreciated, the way in which it's accelerating our culture and the role it's playing in education today. And it has great potential to accelerate even more. And I don't think they, they being YouTube, even realize what they have.
Starting point is 02:13:02 And part of the problem is that unlike a bookstore or a newsstand, where you can go in and see what's being covered, there's really no way for anybody encountering it to have any idea of its depth and breadth. And it's this vast ocean that is kind of subterranean, and people don't even understand the way in which it's working on learning and accelerating every field from brain surgery to science.
Starting point is 02:13:34 It's really incredible. Yeah, I think we're ripe to see tons more innovation in the space of education. Absolutely, absolutely. And kind of pivoting back to the advice piece before I let you go, I did wanna touch on one last thing, which is this thing that you did a while back
Starting point is 02:13:53 where you lived your life for six months, yeah, as if you were going to die. And this became the subject of a This American Life podcast. So explain a little bit about, like, what you did, why you did it, and kind of what you learned from that experience. Well, first of all, I told the story
Starting point is 02:14:12 on one of the first This American Life episodes, and I'd never told it before. I've never told it as well since. So I would urge people to go look for it. We'll link it up in the show notes. It should have been dead. But the short version was that I got this assignment, after a religious conversion in Jerusalem, to live as if I was going to die in six months
Starting point is 02:14:30 and actually take it seriously. So I suspected, knew, as a healthy 20-year-old, that I probably wasn't going to die, but I had to really take it seriously and prepare to do it. And that's what I did. And it took six months, and I was riding my bicycle across to my end date. And I was surrendering the future the entire time, and not taking pictures, because who needs pictures in three months. And that stripping away of my future was really important, because when I was reborn on the next day after the six months, I suddenly had my future before me, and I realized how important it was to have a future forward, how necessary and humane that is, and how inhumane and torturous it was to be stripped of a future. And that kind of instigated my interest in exploring the future and really trying to develop it
Starting point is 02:15:31 because I think an essential part of being a human being is to have something in front of us. Yes, how we contemplate, how we think about the future, how we plan for the future and just conceptualize it really is part and parcel of what makes us human, and is part of the motor or the motivation for getting out of bed in the morning and planning how we're gonna live.
Starting point is 02:15:58 Exactly, right. And in addition to being given these incredible bodies, we're given time, and time only moves forward. That is the future. And so not only do we have this incredible blessing of being able to be put into things that have impact, that we can actually have impact,
Starting point is 02:16:17 unlike being intangible beings of light, we have things that we can do and make and get hurt and hurt others and help others and build stuff. But we also get the time. We get the arrow of time going forward, knowing that we have time in a future. And that to me is the great ride that we're on. The angels in heaven are weeping to see us squander it.
Starting point is 02:16:43 Yes. On the subject of squandering it, I mean, I think the human mind is, you know, oriented around conceptualizing the future in the context of self-interest or optimizing self-interest and also isn't great about pondering the future in a long-term sense, but more in a short-term sense. And long-termism is something you've thought a lot about,
Starting point is 02:17:11 is something you care a lot about, and I think is something that's also percolating up in the culture and becoming a bit of a zeitgeist thing where more and more people are talking about the importance of thinking about and approaching our problems from an extremely, you know, long lens point of view. Right, it's taking kind of a civilizational scale
Starting point is 02:17:32 or a generational scale, is how we like to put it. Thinking in terms of generations, maybe working on something that might not be finished in your own lifespan, that might require generations to complete, like a cathedral or a road system or something grand like that. But even beyond those kinds of grand plans, what we want to do is to kind of help people like ourselves become good ancestors, to actually do something so that in the future they may say, thank you, ancestor, for having started that or done that, in the way that most of what we've surrounded ourselves with here
Starting point is 02:18:10 has been built by previous generations and we should thank them. So what can we do to be a good ancestor besides plant a tree? Right, and what is the answer to that for you? Trying to increase learning in the world so that we can unleash opportunities for the maximum number of people born and yet unborn
Starting point is 02:18:36 so that they can share their genius with us and with themselves. Beautiful. I think that's a great place to land the plane for today. I have a million other things I could have talked to you about today. I'd love to have you back. In the meantime, Excellent Advice for Living: Wisdom I Wish I'd Known Earlier is your new book.
Starting point is 02:18:58 Everybody check it out. This is like the perfect book to just, you know, basically open to any page and have a thought for the day. I love it. And you're a national treasure. Thank you. A global treasure. An international treasure.
Starting point is 02:19:12 A cosmic treasure. Okay, well thank you. A cosmic treasure, yes. You're a gift. You're somebody I've followed for a very long time. And like I said at the outset, I was nervous to meet you
Starting point is 02:19:22 because I've been very invested in the work that you've done for many years. So it really was an honor and a gift to have you here to share with me today. It was my pleasure. I enjoyed every minute. You're a great interviewer. I just love your presence.
Starting point is 02:19:33 Thank you for being you. I appreciate it. Thank you, Kevin. Cheers, peace. Bye. Plants. Progress. Progress. There you go. Awesome.
Starting point is 02:19:55 That's it for today. Thank you for listening. I truly hope you enjoyed the conversation. To learn more about today's guest, including links and resources related to everything discussed today, visit the episode page at richroll.com, where you can find the entire podcast archive, as well as podcast merch, my books Finding Ultra, Voicing Change, and The Plant Power Way, as well as the Plant Power Meal Planner at meals.richroll.com. If you'd like to support the podcast,
Starting point is 02:20:26 the easiest and most impactful thing you can do is to subscribe to the show on Apple Podcasts, on Spotify and on YouTube, and leave a review and or comment. Supporting the sponsors who support the show is also important and appreciated. And sharing the show or your favorite episode with friends or on social media
Starting point is 02:20:46 is of course awesome and very helpful. And finally, for podcast updates, special offers on books, the meal planner, and other subjects, please subscribe to our newsletter, which you can find on the footer of any page at richroll.com. Today's show was produced and engineered by Jason Camiolo with additional audio engineering by Cale Curtis. The video edition of the podcast was created by Blake Curtis
Starting point is 02:21:12 with assistance by our creative director, Dan Drake. Portraits by Davy Greenberg, graphic and social media assets courtesy of Daniel Solis, Dan Drake, and AJ Akpodiete. Thank you, Georgia Whaley, for copywriting and website management. And of course, our theme music was created by Tyler Pyatt, Trapper Pyatt, and Harry Mathis. Appreciate the love, love the support. See you back here soon.
Starting point is 02:21:39 Peace. Plants. Namaste. Thank you.
