Moonshots with Peter Diamandis - Elon Musk on Abundance, AGI, and The Media in 2024 | EP #79 (X Spaces)

Episode Date: January 4, 2024

In this episode, Peter and Elon hop on X Spaces to discuss data-driven optimism, solving grand challenges, uplifting humanity, digital superintelligence, longevity, education, and abundance in 2024.

12:42 | The Revolution of Communication Technology
22:12 | Redefining Overpopulation and Healthspan
1:10:05 | The Quest for Safe AI

Elon Musk is a businessman, founder, investor, and CEO. He co-founded PayPal, Neuralink, and OpenAI, founded SpaceX, and is the CEO of Tesla and the Chairman of X.

Listen to the Spaces on X: https://x.com/PeterDiamandis/status/1742713338549997884?s=20

Learn about the latest XPRIZE
____________
I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:

Use my code PETER25 for 25% off your first month's supply of Seed's DS-01® Daily Synbiotic: seed.com/moonshots

ProLon is the first nutri-technology company to apply breakthrough science to optimize human longevity and support a healthy life. Get started today with 15% off here: https://prolonlife.com/MOONSHOT
_____________
I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog

Get my new Longevity Practices book for free: https://www.diamandis.com/longevity

My new book with Salim Ismail, Exponential Organizations 2.0: The New Playbook for 10x Growth and Impact, is now available on Amazon: https://bit.ly/3P3j54J
_____________
Connect With Peter: Twitter Instagram Youtube Moonshots

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Is crypto perfect? Nope. But neither was email when it was invented in 1972. And yet today, we send 347 billion emails every single day. Crypto is no different. It's new. But like email, it's also revolutionary. With Kraken, it's easy to start your crypto journey with 24-7 support when you need it. Go to kraken.com and see what crypto can be. Not investment advice. Crypto trading involves risk of loss. See kraken.com slash legal slash ca dash pru dash disclaimer for info on Kraken's undertaking to register in Canada. Order up for Damien. Hey, how did your doctor's appointment go, by the way?
Starting point is 00:00:34 Did you ask about Rebelsis? Actually, I'm seeing my doctor later today. Did you say Rebelsis? My dad's been talking about Rebelsis. Rebelsis? Really? Yeah. He says it's a pill that... That's right. Did you know it's also covered by most private insurance plans? Well, I'll definitely be asking my doctor if Rebelsis is right for me. Rebelsis. Ask your doctor or visit rebelsis.ca. Order up for Rebelsis. The following is a New Year's conversation I had with Elon Musk, arguably the greatest entrepreneur of our time, perhaps the greatest entrepreneur ever. During this conversation, which is called The Coming Age of Abundance, we talk about
Starting point is 00:01:12 how technologies, including AI and humanoid robotics, are creating a world of abundance, uplifting humanity across food, water, energy, healthcare, and education. We'll be talking about longevity. We'll be talking about AI. We'll be talking about the decreasing global population. And what are the reasons that you should have for being optimistic about the future? Join me. Happy New Year, Elon. Happy New Year. 2024. I love 2024. Yeah, it feels like the future. It is the future.
Starting point is 00:01:47 It's going to be awesome. I'm hoping we get a little bit of conversation on some hope and good news for folks on this basis. I think people can use it. Peter, you are the most optimistic person that I know of, by far. I guess it is refreshing to hear such optimism. I think people need a positive mindset. I think it's a self-fulfilling prophecy to a large degree.
Starting point is 00:02:25 If you're pessimistic, and we can talk about how the news media just decimates our minds constantly. Yeah, the news is so negative. I mean, it makes me sad to read the news, frankly. Well, let me ask you a question. I don't watch network news, and I don't read any newspapers. They couldn't pay me enough money to do that. I'll accidentally read the news and I'll just be sad. It's insane. Well, I mean, as you know, the news, the daily news really attempts to answer the question, what is the worst thing that happened on Earth today? It is. And let me show it to you every five minutes in your living room over and over and over again. Yeah, and it's a big world. There's 8 billion people on Earth. So, you know, somewhere on Earth, something horrible is happening every single day.
Starting point is 00:03:16 But there's also great things happening every single day. You know what I call CNN? The Crisis News Network, or the Constantly Negative News Network. And the problem is... They've got to scare you. Otherwise, you know, if they say, hey, it's been a pretty good day overall. You know, violence is at an all-time low. We've got more access to food, energy, water, health care, and education on the planet than ever
Starting point is 00:03:46 before. I mean, people would just start watching horror movies instead, I think. Yeah. The challenge is, it's our neural nets. The wiring of our brains, you know, evolved in a world of constant danger, and so we're sort of just wired for fear and scarcity constantly. Yeah. Well, yeah, I think you've made this point. Maybe others have. It sort of makes sense as an evolutionary asymmetry that we would respond more to danger than to reward, in that the consequences of danger could be fatal. It could be like, well, if you go over there, there's a lion that's going to eat you, or some neighboring tribe that's going to kill you, and it's game over. Your genes are out of the gene pool.
Starting point is 00:04:40 Yeah, you're out. Whereas, say, news that there's a nice bush with berries over there, it's nice to have. It's optional. But in one case, you die. In the other case, you're hungry. But death is worse than hunger. So that's why. Basically, anyone who did not respond more to negative news than positive news didn't make it. They were selected against, for sure. Yes, I mean, anyone who was complacent about where the lion was, was eaten by the lion. And, you know, the reality is, the news media has one job: to deliver your eyeballs to their advertisers. And when we pay 10 times more attention to negative news than positive news, that's all we get, 24-7. So, I mean, listen.
Starting point is 00:05:34 It's inevitable. It is. I mean, I do get my news invariably on my feed on X, but I also get all the great things happening in the world, because I can selectively choose to watch that. But when you're watching TV or, you know, reading the newspaper, some editor someplace or some producer is deciding what gets fed into your mind, and it can really screw with your mindset. Yes, exactly. Yeah. So, I mean, this Space is called The Coming Age of Abundance. And, you know, you were really supportive when my first book came out, Abundance: The Future Is Better Than You Think, and I appreciate that. That was 12 years ago, and I think the story... Whoa, 12? Amazing. Yeah. And it's gotten so much better. It has, in so many ways.
Starting point is 00:06:27 Not every way. I mean, obviously we go back a ways. I remember when we were at a birthday party in Brazil. How long ago was that? Was it his 40th, or was it his 30th? I don't know. It was his 30th. This was just when
Starting point is 00:06:51 SpaceX was forming. It was 2003. It was just before the XPRIZE was won. I was trying to convince you not to build rockets. 21 years ago. You could have a kid that's legally a grown-up by now, you know. Yeah. So, 21 years ago. And, you know,
Starting point is 00:07:20 things are mostly better. You know, I want to give everybody listening a dose of hope and optimism on the abundance side, because the world has gotten better in so many ways, and all you hear about is the negative, constantly. And I think that's going back to our sort of core dystopian mindsets from, you know, evolving 100,000 years ago. But if we just look at some of the areas, right? So, like, global extreme poverty. I mean, what's a more important metric? Here's the numbers: 90% of the world was in global extreme poverty in the 1800s. In 1981, it was 42%. Today, it's under 10% of the world. Right?
Starting point is 00:08:06 Yeah. Hunger is actually rare, and it used to be common. Exactly. And another one, obviously, that you're leading the charge on here is energy. You know, we used to kill whales to get whale oil to light our nights, and we ravaged mountainsides, and we drilled kilometers into the ground. What's the figure? It's like 8,000 times more energy hits the surface of the Earth from the sun than we consume as a species. But what's the rate at which batteries and solar is increasing?
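The "8,000 times" figure quoted here can be sanity-checked with a rough back-of-envelope sketch. All inputs below are assumed round numbers, not figures from the conversation; the exact ratio depends heavily on what you assume for atmospheric losses and total human energy use:

```python
import math

# Rough sanity check of the "sunlight vs. human consumption" ratio.
# All values are assumptions (round numbers), not figures from the episode.
SOLAR_CONSTANT_W_M2 = 1361.0   # top-of-atmosphere solar irradiance
EARTH_RADIUS_M = 6.371e6
ATMOSPHERE_TRANSMISSION = 0.5  # crude allowance for clouds, absorption, albedo
HUMAN_POWER_W = 18e12          # ~18 TW average primary energy consumption

# Sunlight is intercepted over Earth's cross-section (a disc), not the full sphere.
intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M ** 2
surface_w = intercepted_w * ATMOSPHERE_TRANSMISSION

ratio = surface_w / HUMAN_POWER_W
print(f"Sunlight reaching the surface: {surface_w:.2e} W")
print(f"Ratio to human consumption: ~{ratio:,.0f}x")
```

With these assumptions the ratio comes out in the thousands; more generous assumptions about atmospheric transmission, or more conservative consumption figures, push it toward the quoted 8,000x.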
Starting point is 00:08:37 It must be massive. Yes. I mean, at Tesla, we've made a couple of presentations, one sort of simplistic and then one in extreme detail, on how to make Earth completely self-sustaining from an energy standpoint. Sure. And demonstrating that if you break down all of the raw materials for lithium-ion batteries and for solar, you can easily make Earth self-sustaining. I mean, there's no shortage of materials. It's easy. It's a lot of work, obviously. But there's not like some critical material that we don't have enough of in order to make
Starting point is 00:09:18 Earth fully self-sustaining. Even if you powered all of industry on Earth, and all power, including heating and transport, electrically, you could do that with solar and lithium-ion batteries. You know, at one point the most precious metal on the planet was aluminum. It was more precious than gold and platinum. And even though the Earth's crust is 8% bauxite, basically aluminum, it was just so energetically difficult to extract. It wasn't that it was scarce. It just wasn't in usable form yet. And that's what technology does. It takes something which is scarce and not usable and makes it usable, right?
Starting point is 00:10:11 Yeah. Aluminum oxide is extremely common, but it is a low energy state. In fact, thermite is just iron oxide, rust, and pure aluminum. And the energy difference between iron oxide and aluminum oxide is so great that it generates enough heat to melt through steel. That's what thermite is. So, yeah, you do need a lot of energy to turn aluminum oxide into aluminum. But, yeah, in World War II, there was a massive scarcity of aluminum for aircraft. Sure. And, in fact, in Britain, the Mosquito, sort of a fighter-bomber, was made mostly of wood. It was basically an early form of composites, using stiff wood on the outside and light wood like balsa as a sandwich structure. It's pretty clever.
Starting point is 00:11:20 The whole thing was intended to address the shortage. And then we get technology, we get better mechanisms of extracting the aluminum from the aluminum oxide, from the bauxite. And this happens over and over again. In fact, that's just what we do. I mean, I think the number was, last year in 2023, or maybe in '22, we had more new electricity production from solar than from any other form. And you've done an extraordinary job on battery production. Yeah. And the battery production is growing actually almost at several times the rate of vehicle
Starting point is 00:12:02 production. So, you know, in some cases, almost 10x the rate of vehicle production. So, yes, there's a massive demand for batteries. And, you know, as the world uses more electricity, there's actually a lot more capability that the grid has if you can buffer the energy than without it because the vast majority of electrical grids assume no buffering. So they have to size the power plants for peak output,
Starting point is 00:12:38 peak power output, which is typically a hot summer day. And then at night, you can have anywhere from, optimistically, half power output, to sometimes one-tenth of the power output. So basically, almost everywhere, the grid is sized for excess electrical power output. And if you just buffer it with batteries, you can increase the output of the grid by two or three times.
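The buffering argument here can be sketched numerically. The hourly demand profile below is made up purely for illustration; the point is that a grid sized for its peak hour could, with enough battery storage, deliver capacity × 24 hours of energy per day instead of only what the fluctuating profile draws:

```python
# Hypothetical hourly demand in GW over one day (illustrative numbers only).
hourly_demand_gw = [10, 8, 8, 8, 10, 15, 25, 40, 50, 55, 58, 60,
                    60, 58, 55, 50, 45, 48, 52, 45, 35, 25, 18, 12]

peak_gw = max(hourly_demand_gw)          # plants must be sized for this hour
energy_used_gwh = sum(hourly_demand_gw)  # what the unbuffered grid delivers
energy_possible_gwh = peak_gw * 24       # plants running flat out into batteries

print(f"Peak capacity:             {peak_gw} GW")
print(f"Delivered without storage: {energy_used_gwh} GWh/day")
print(f"Deliverable with storage:  {energy_possible_gwh} GWh/day "
      f"({energy_possible_gwh / energy_used_gwh:.1f}x)")
```

With this made-up profile the gain is under 2x; the deeper the overnight trough relative to the daytime peak, the closer the multiplier gets to the two-to-three-times described in the conversation.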
Starting point is 00:13:16 I mean, to make the point here on the abundance theme, there is no limitation on energy, right? We are increasing the amount of energy per capita, and there's a direct correlation between the GDP of a nation and its energy production, right? And a direct correlation between health and education and energy. Everything scales as you increase the energy per capita of a nation. Yeah. Let's talk about another category, communications. Another area that you're revolutionizing. I think the number right now, I just was checking it earlier,
Starting point is 00:13:55 it's like 6.9 billion smartphone users in 2023. Really? 86%. And that's what I got when I Googled it. I don't know what. 6.9, you don't say. Yeah. 6,900 million. 6,900 million? 6,900 million.
Starting point is 00:14:09 Let's go there. My 12-year-olds would say the same. It's like 85% of the planet's got a smartphone. Well, I mean, if you add up the total number of smartphones ever made, it exceeds... Yeah. Oh, I think so. It's amazing. So we've gone from like zero telephony to the vast majority of the planet in under a century. You know, global internet as well, the same thing. And your next iteration of Starlink spacecraft have gone up, the direct-to-cell-phone ones? Yes, that just went up. So, you know, we still have to prove that it works and all, but we're confident that even if these early satellites don't work, we're confident from a physics standpoint that it can work.
Starting point is 00:15:11 It is a challenge, because we have to emulate a cell tower on the ground in order for the phones to accept the signal. So we have to do Doppler compensation, for example, and do some sort of... Because you've got the speed of light limitations. Pal, if it were easy, it would have been done already. Yeah, there's some speed of light limitations. So, you know, light is so fast and yet so slow. I think a good way to think about light, at least in the space context, or the low-orbit context, is it travels about 300 kilometers every millisecond
Starting point is 00:16:10 in air or in space, and then around just over 200 kilometers per millisecond in fiber. Hey everyone, I want to take a quick break from this episode to tell you about a health product that I love and that I use every day. In fact, I use it twice a day. It's Seed's DS-01 Daily Synbiotic. Hopefully by now you understand that your microbiome and your gut health are one of the most important modifiable parts of your health. You know, your gut microbiome is connected to everything: your brain health, your cardiac health, your metabolic health. So the question is, what are you doing to optimize your gut? Let me take a moment to tell you about what I'm doing. Every day I take two capsules of Seed's DS-01 Daily Synbiotic.
Starting point is 00:16:49 It's a two-in-one probiotic and prebiotic formulation that supports digestive health, gut health, skin health, heart health, and more. It contains 24 clinically and scientifically proven probiotic strains that are delivered in a patented capsule that actually protects the contents from your stomach acid and ensures that a hundred percent of it is survivable reaching your colon. Now if you want to try Seed's DS01 Daily Symbiotic for yourself, you can get 25% off your first month supply by using the code peter25 at checkout. Just go to seed.com slash moonshots
Starting point is 00:17:25 and enter the code PETER25 at checkout. That's seed.com slash moonshots, and use the code PETER25 to get your 25% off the first month of Seed's Daily Synbiotic. Trust me, your gut will thank you. All right, let's go back to the episode. You know, another abundance area is health. I hear some kids in the background there. So interestingly enough, you know, child mortality,
Starting point is 00:17:51 I mean, this is probably the one that hits me the most in terms of increasing abundance. Child mortality under the age of five was 42% a couple hundred years ago. It was a coin flip whether your kids survived. And it's decreased now to under 5%, and it's gone down by 50% in the last 30 years. So just childhood mortality, and women dying in childbirth, all of these things people don't think about when they're listening to all the news and all the issues. And then life expectancy, my favorite subject, has gone up from 30 years old to 75-plus. I still disagree with you on longevity, though. That we should solve it or not? You think we should solve it? Well, listen, I'm not necessarily saying live forever, but I'd like to make it to 120, 150.
Starting point is 00:18:51 Yeah, I sort of wonder if we should. You know, think of presidential elections. And do you really want them to get that life expectancy first? Well, you know, I think being able to have the vitality, the cognition, the physical prowess, you know, that you have when you're in your 40s or 50s, through the age of 100, that's my goal. I mean, you don't want to...
Starting point is 00:19:32 you know, you want to make it to at least 100, don't you? Well, I guess it does depend on whether I, you know, have dementia. I don't think I'd want to be a burden on society, or have dementia and not know what's going on. I'd prefer to be dead. Well, yeah, I think that's for sure. But let's assume that you had, you know, all the cognitive power you have today and your physical strength. Is there any reason why you wouldn't want to have an extended lifespan or healthspan? Yeah, sure. I guess there's such a strong forcing function for life extension or healthspan extension that I think we will see
Starting point is 00:20:23 advances in that area, whether I want them to be there or not. And actually, my opinion on the subject is that I think it's actually not that hard to solve. Because if you just consider arguments of symmetry, which are quite helpful: the cells in our body all age at almost exactly the same speed. I've not seen anyone who has an old left arm and a young right arm. That's correct.
Starting point is 00:21:00 I've never seen that, not even once. Yes. So how are the cells communicating? What is synchronizing their behavior? There's a very clear mechanism for synchronizing aging among the 30 to 40 trillion cells in your body. Depending on your body mass, you're typically going to have 30 to 40 trillion cells. That's correct. They keep in sync.
Starting point is 00:21:27 I mean, the other reference proof point is, you know, bowhead whales, one of the largest mammals, can make it to 200 years repeatedly. Greenland sharks can make it to 500 years and have babies at 200 years old. I remember when I was in medical school hearing that, I said, you know, why can they, why can't we? I said, it's either an engineering problem or a software problem, a hardware problem or a software problem.
Starting point is 00:21:51 And I think this is one of the biggest things AI is going to give us: a real understanding. And then to your other point about your left arm and right arm, you know, when you have a baby, you know, whether you're 30, 35, or 40, 45, your baby starts out at zero. Yeah. I do find it remarkable that we decompress from a single cell to an adult human, and then when we procreate, we compress back down to a single cell. That is fascinating. You know, I mean, you sort of look at yourself as a sort of, you know, blastocyst and say, like, I haven't changed a bit. Here we go again.
Starting point is 00:22:40 Let's recycle. It's like the Big Bang, right, and the Big Crunch. You know, one of the things people argue about on extended healthspan, and, you know, increasing the population, getting to 100, 120, 150: there's a concept called longevity escape velocity, right? That there's going to be a point at which, for every year you're alive, science is extending your life for more than a year. And that's an interesting idea to think about. In which case, you know, accidents become really the thing to be concerned about. They
Starting point is 00:23:12 are concerned about overpopulation. And you've hit this multiple times, right? And I saw you tweeting about it, or, sorry, X-ing about it today. But overpopulation is one of the biggest myths and biggest falsehoods. Overpopulation is utter BS. It's such nonsense. Earth is underpopulated, not overpopulated. Look out the window when you're flying across the country. It's empty.
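The longevity-escape-velocity idea mentioned a moment earlier reduces to a simple threshold: if science adds more than one year of remaining life expectancy per calendar year, remaining life expectancy never runs out. A toy sketch, with purely illustrative numbers:

```python
# Toy model: each calendar year you spend one year of remaining life
# expectancy, and medical progress adds `gain_per_year` back.
def years_remaining(start_remaining: float, gain_per_year: float, horizon: int):
    remaining = start_remaining
    trajectory = []
    for _ in range(horizon):
        remaining = remaining - 1.0 + gain_per_year
        trajectory.append(remaining)
    return trajectory

print(years_remaining(10, 0.5, 5))  # gain < 1: expectancy shrinks each year
print(years_remaining(10, 1.5, 5))  # gain > 1: expectancy grows each year
```

Past the gain-greater-than-one threshold, as the conversation notes, accidents rather than aging become the dominant thing to worry about.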
Starting point is 00:24:32 Absolutely. Exactly. If your goal was, like, flying from LA to New York, to drop a bowling ball on a human, you would fail. You know, I think, one of my goals, the greatest gift we can give people in this abundance world is increased healthspan. I mean, when you think about what people want, they want happiness and they want health, right? No one wants to die of a painful cancer or dementia. I have extended to you many times, my friend: come down to Fountain Health.
Starting point is 00:24:32 Let me put you through our program. The world needs you around for another 30 years. Okay. What should I do? What actions can be taken? This could be helpful for listeners on this discussion. So there are three things you need to know. Number one, is there anything going on inside your body that you don't know about? So the body is amazingly good at hiding disease.
Starting point is 00:25:56 So we found, in our seemingly healthy adults, 2% have a cancer they don't know about. 2.5% have an aneurysm. 14.4% have either metabolic disease, coronary disease, or neurodegenerative disease. And so your body is incredibly good at hiding disease, right? You don't actually feel most cancers until stage three or stage four. 70% of all heart attacks have no preceding symptoms. Your body is compensating constantly. And for most of us, we know more about what's going on inside our cars or airplanes than we do our bodies. Yeah. I had a question for you. How many sensors are going up on Starship when you're launching? How many sensors are on board that vehicle, rough order of magnitude,
Starting point is 00:26:30 getting back data? Well, I guess, when you count everything up, there's several thousand sensors. I mean, there are 32 engines, so they account for the bulk of the sensors. Sure, but you also have stress sensors, and you're looking at what's going on in the structures and avionics and communications across all the subsystems. And if I ask you how many... 39 engines, including the upper stage. So there's several thousand sensors. Way more sensors than our body. Yeah, I was going to say, if I asked you, how many sensors do you have in your body?
Starting point is 00:27:09 And so we don't look, which is insane, because we do have the technology now to look, to determine: is there anything going on I need to know about? And when's a good time to find out? Like, now. Or what's likely to break, what's likely to undergo failure cycles, and then what's the most extraordinary therapeutics available to extend the human lifespan. So, I mean, for me, that's a big one. Let me ask you about another abundance theme, which is education. Do you think any of our schools today,
Starting point is 00:27:09 middle schools or high schools, are preparing any of our kids for the future? Well, none that I know of. I mean, there probably are a few schools that are doing it, but probably 99% of schools are not. Yeah. Schools are very slow to change. And I think that there does seem to be a move away from teaching the fundamentals, you know, of writing well and math and history.
Starting point is 00:28:30 You know, I am concerned about the whole woke agenda and ideology permeating through education. Agreed. And actually being destructive to education. Agreed. I mean, you know, do you have any thoughts? I mean, at the end of the day, I think our best health care and our best teachers are going to be AIs, right? That understand everything. And they demonetize and they democratize every aspect. I mean, the AI knows your kid's favorite color, sports star, movie star, the languages they know, what they did today. I mean, it's a way of giving every child on the planet the best education.
Starting point is 00:29:07 I mean, you funded, back years ago, if you remember, the Global Learning XPRIZE that we did, that we demoed in Tanzania, with AIs on tablets. It was the earliest days of AI. Emad Mostaque, who you know, was one of the winners of that competition, and went on to create Stability AI. The challenge is, I don't think the educational systems are going to give up that control anytime soon. If I look at how my kids are educated,
Starting point is 00:29:07 they seem to be mostly educated by YouTube and Reddit. Yeah. And X as well, I suppose. But they're constantly on the internet. That seems to be, and I guess a lot of kids these days, TikTok. Unfortunately, yeah. Yeah. So I think the education situation is problematic.
Starting point is 00:30:30 Not sure what to do about it. I think ultimately Khan Academy is one step in the right direction. But I think part of it is getting our educational institutions to first realize they're not preparing kids for the world that's coming. I mean, this hits another point: we are such linear thinkers, right? We're projecting what we have and putting it out four or five years. Let me ask you, do you think people are ready for what the world's going to be like in 2030? No. I don't know what the world's going to be like in 2030. So probably I wouldn't say that I'm necessarily ready. We definitely live in the most interesting of times.
Starting point is 00:30:50 You know, there's allegedly this Chinese saying that "may you live in interesting times" is not a good thing. But I mean, I think personally, I would like to live in the most interesting of times. And this is the most interesting of times. I also think it's the best time to be alive, ever. The only time more exciting than today is tomorrow.
Starting point is 00:30:50 I mean, you know, by what measure... Yeah, we have environmental issues, but I think we have the best chance of solving those environmental issues with the technology we have now versus technology we had 10 years ago. Yeah, well, I generally think it's, as a general sort of rule of living, it is better to err on the side of being optimistic and wrong than pessimistic and right. If you're going to err on one side or the other, it's just a higher quality of life to err on the side of being optimistic and risk being wrong than pessimistic and right. I mean, it's better to be...
Starting point is 00:32:06 Just enjoy life, pal. Optimism is going to make you happy. There was a study of 14,000 people. It was 14,000 women and 1,500 guys. And it showed that those who had an optimistic mindset lived 14% to 15% longer than those with a pessimistic mindset. I mean, mindset's a powerful thing. And I think undervalued by almost everybody.
Starting point is 00:32:48 One of my favorite examples of this was, do you know what the environmental disaster of the 1880s and 1890s was? I'm sure you know. Is this back to whales? No, no, no, no. It's another form of life. It's the horse manure disaster. Oh, the horse shit problem. What is this horse shit? You're right. I mean, New York was basically horse manure and urine and carcasses. It was terrible.
Starting point is 00:33:09 I mean, basically New York smelled like shit. If you think New York subways are stinky right now, I mean, try that. It's just horse piss everywhere. Everywhere. People moved into downtown New York, Detroit, Chicago. They brought their motive power with them.
Starting point is 00:33:49 And the articles in the 1880s and '90s, and into the 1900s, as you read them, the projection was disaster. It was going to be a disaster, because horse manure was, like, growing. They had, like, special parking lots for horse shit at the corner of every street. Well, it seems like there's also a challenge, because I think if a horse lives, you know, on the order of 15 years, that means one-fifteenth of all horses are kicking the bucket every year.
Starting point is 00:33:49 So if you've got like 300,000 horses, that means you've got 20,000 horses dropping dead every year. And then you need a horse to move the horse, the dead horse. Or just cover it with horse manure and let it decompose. Yeah, I mean, there's going to be decomposing horse carcasses throughout major cities.
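The horse arithmetic in this exchange is a simple steady-state estimate: if horses live about 15 years, roughly one-fifteenth of the population dies each year. Spelled out, with the conversation's illustrative numbers:

```python
horses_in_city = 300_000
avg_lifespan_years = 15

# Steady state: about 1/lifespan of the population dies each year.
deaths_per_year = horses_in_city // avg_lifespan_years
print(f"{deaths_per_year} dead horses per year, ~{deaths_per_year // 365} per day")
```

That is 20,000 carcasses a year, dozens a day, in one city alone, which is why the projections of the time looked so dire.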
Starting point is 00:34:39 And then it's like, well, whose dead horse is this? People are probably quick to claim a live horse, but they're not rushing to claim a dead horse. But then innovation came along, and here comes the car, and it solves the problem. And I think that's the thing that we keep on forgetting. We forget that we are incredibly innovative at solving problems. That's what humans are amazing at.
Starting point is 00:34:39 Yeah, yeah. True, true. Actually, on the whale front, I don't even know. Back to the whales. It's a whale of a tale. Whale tales. Okay. So a lot of people think that the low point on whale population was in the 1800s because whales were being hunted for whale oil. But actually the low point by far was in fact in the mid-20th century because of a bureaucratic
Starting point is 00:36:06 error in the Soviet Union. Okay. This I have to hear. So they would always have these five-year plans and quotas for whale tonnage. Now, they didn't even track whether the whale tonnage was usable tonnage. Anyway, they just had a quota for whale tonnage. And what they would do in the Soviet Union, they would just keep increasing the quota of everything every year, every five years. So the whale tonnage just got higher and higher and higher. If you're a captain and you
Starting point is 00:36:06 had a high whale tonnage, you'd get a medal and a raise. And if your whale tonnage was low, you'd be sent to the gulag. You get what you incentivize, baby. Yes, incentives matter. And so, it got to, it just got to absurd levels. And
Starting point is 00:36:22 you had Soviet whaling ships going into US and Australian waters, desperately trying to find whales. They would catch the whale, weigh the whale, and then dump the cargo overboard. I mean, there's a whole internet rabbit hole that you can go down in this. Which is basically a lesson in the follies of central planning.
Starting point is 00:36:48 And this is the problem of keeping laws on the books, way after they're useful to society. Well, we do actually have a fundamental issue with the accumulation of laws and regulations because they are immortal, and humans are mortal, as we were just discussing with life extension. So naturally, every year, you're going to have this accumulation of laws and regulations until eventually everything is legal.
Starting point is 00:37:21 You mean everything is illegal? Everything is illegal, yes. Nothing is allowed, because you have overlapping laws and regulations, some of which in fact contradict each other. Whether you go left or right, both left and right are illegal. My God.
Starting point is 00:37:42 You know, it's like SpaceX is experiencing exactly this with the DOJ right now, in this particular case where the DOJ, as you may know, is suing SpaceX for hiring only permanent residents and citizens of the US. The reason that we did this was because we were told very clearly that if we did not hire only permanent residents and citizens of the United States, that would constitute a violation of the International Traffic in Arms Regulations, ITAR, and the entire executive team of SpaceX and the board would go to prison. Sounds like a good motivation. Yes. And so we were literally told this by the government in very clear terms. And you're well aware of ITAR.
Starting point is 00:38:29 I am well aware of ITAR. Yes, it's a nightmare. Which, by the way, puts us in a non-competitive position against other nations. Yes. It's pretty bad. But then the DOJ is suing SpaceX for not hiring asylum seekers. An important point here: not asylees, those who have been granted asylum, but those who are seeking asylum. There's a lot of those. There's a lot of people seeking asylum. Yes. So we're damned if we do and damned if we don't.
Starting point is 00:39:14 So if we hire someone who's not a permanent resident, we're breaking the law. And if we don't hire someone who's not a permanent resident, we're breaking the law. So this is an example of the madness that we're facing. Buddy, I want to compliment you on something, and I've known you for long enough to have seen you go through this: you have bet everything over and over again. You bet your entire fortune, got into debt, to do the things that you believe in. And I have a question to ask you, which I've been dying to ask, and I'm going to start making this known. There are so many billionaires on the planet who have tens of billions and hundreds of billions of dollars who are effectively sitting on it and not changing the world.
Starting point is 00:40:11 Not putting it to work, you know, other than for increasing its return, which is not a bad thing. I think the more wealth and free energy there is out there to do things, the better. But can you speak to that? I don't know if you're willing to, but there are a few people, like Marc Benioff and Eric Schmidt and yourself, top of the list, who invest in making the world a better place, solving global grand challenges. Thoughts on that? I'm going to open something up. Sorry, I just have some kids and stuff around. No problem.
Starting point is 00:40:45 Sort of family noises in the background. So let's see. Yeah. Well, yeah, I mean, I do think that smart people with resources should care about the good of civilization, the future of civilization, even if they're not particularly altruistic, because you cannot exist absent civilization. If civilization collapses, it's all over. You know, it's like people have got these sort of bunkers in other countries or Hawaii or whatever.
Starting point is 00:41:40 And I'm like, listen, do you really think that you're going to make it in an apocalyptic situation? They'll come and find you in that bunker, and they'll pry you out and get your stuff. And it's going to suck anyway. So really, any smart person with resources should have some long-term perspective. I mean, listen, it's you and Jeff Bezos right now who've got the biggest long-term perspective. And I'm just curious about how do you incentivize other people to really help the world accelerate and make the world
Starting point is 00:42:30 a more productive place? Right? The best way to become a billionaire is to help a billion people. The world's biggest problems are the world's biggest business opportunities. Do you believe that? Well, I guess maybe we should just talk to people more. I guess just talking to them, and, you know, I think perhaps raising conscious awareness of the fact that
Starting point is 00:42:57 there is no living without civilization. One doesn't actually have to make an altruistic argument; you can actually make a purely self-interested argument. Life would be miserable without civilization. And if you want to know
Starting point is 00:43:20 what life would be like without civilization, just go try living in the forest naked for a day. You will be naked and afraid, and quickly realize civilization is awesome. You have to eat bugs and get eaten by bugs. We're so interdependent upon each other to enable the state of technology and capability we have today. I'm curious, is there anything that you think isn't becoming more abundant out there, or anything that we have an abundance constraint on? Just answer that first; I do want to come back in a second and talk about the carbon removal XPRIZE that you funded. You know, we're creating abundance of food, energy, water, healthcare, education.
Starting point is 00:44:18 Are we constrained in any way? No, not really. No. I mean, I think I agree with you that the future most likely has abundance. We shouldn't be complacent about the future; complacency and entitlement are not a recipe for success. But the most likely outcome is one of abundance of goods and services.
Starting point is 00:44:47 That is certainly where we're headed. That's highly likely where we're headed. Very likely, yeah. When I saw you last, you said, you know, definitely abundance after AGI. And I saw you talking about Optimus, which is awesome, by the way. Let's just start with that. Congratulations.
Starting point is 00:45:09 Well, thanks. With Optimus, I need to make sure that it doesn't add to civilizational risk, because you don't want, like, a billion of these things all with centralized control. What could go wrong? Has anyone made a movie about that? Unless they obey you. Well, I don't think they should obey anyone. I think you have to have
Starting point is 00:45:45 local control. It has to be decentralized because any central control is going to be problematic. Because you just can't be... It could be a rogue AI takes... I mean, let's look at a movie.
Starting point is 00:46:01 A rogue AI, Terminator-style, somehow gains control of the mothership that controls all the Optimus robots, or something like that. Basically, it needs to be impossible for that to occur. Let me ask you, I always love combining your companies, because it's like, you know,
Starting point is 00:46:27 it's like a Reese's peanut butter cup, you know, Starlink and Starship together. How about Optimus and Neuralink?
Starting point is 00:46:40 When can I plug into an Optimus with my Neuralink connection? Well, the first Neuralink in a human will hopefully be soon, within the next month or two. Now, at first this is really just trying to give quadriplegics and tetraplegics the ability to control their phone and computer. The first product is basically telepathy, or telekinesis. And then the second product, tentatively called Blindsight, is where we can restore sight even if somebody has lost both eyes and their optic nerve. Amazing. Go straight to the visual cortex, yeah. Yeah, exactly.
Starting point is 00:47:48 Now, these things already work in monkeys. And I'd just like to re-emphasize, no monkey has ever died because of a Neuralink. And we treat our monkeys extremely well. Last time I spoke to you about this, you were playing Pong against Pager, I think. That's right. Actually, it turns out monkeys love playing video games. They're really just like us. I mean, they love eating snacks and playing video games. I'll tell my 12-year-old boys about that.
Starting point is 00:48:20 Yeah, I mean, you see the video of Pager playing Monkey Mind Pong, just playing Pong telepathically. He's not restrained in any way. He's just sitting on a sort of tree branch. Drinking smoothies. Drinking a smoothie and playing Pong. He's not held down. In fact, he gets upset when we
Starting point is 00:48:46 take his video game away, just like humans, just like our kids. That's awesome. And this, I mean, giving sight to the blind, it's biblical stuff. And this is, again, coming back to the original point: kicking off the new year with a positive mindset, with an abundance mindset, with an exponential mindset, with a moonshot mindset, right? And I think, and I hope you agree, for entrepreneurs and people listening, the single most important thing we have is our mindset, how we see the world. I mean, would you agree with that?
Starting point is 00:49:35 Yeah, I mean, you can choose to be happy. I mean, happiness, I think, is a decision. Largely. I mean, there are some people that have chemical imbalances, but for most people, the difference between being happy and unhappy is deciding to be happy. Over the years, I've experimented with many intermittent fasting programs. The truth is, I've given up on intermittent fasting, as I've seen no real benefit when it comes to longevity. But this changed when I discovered something called ProLon's 5-Day Fasting Nutrition Program. It harnesses the process of autophagy. This is a cellular recycling process that revitalizes your body at a molecular level.
Starting point is 00:50:10 And just one cycle of the 5-Day ProLon Fasting Nutrition Program can support healthy aging, fat-focused weight loss, improved energy levels, and more. It's a painless process, and I've been doing it twice a year for the last year. You can get 15% off on your order when you go to my special URL. Go to ProlonLife.com, P-R-O-L-O-N-L-I-F-E.com slash moonshot. Get started on your longevity journey with ProLon today. Now back to the episode. I want to take a second for everybody listening to point something out. You know, Elon, it was 2021. I was texting with you, and I said you should fund another XPRIZE. You funded an XPRIZE years ago on teaching kids in the middle of Tanzania reading, writing, and arithmetic without any adults, no schools around, just on a tablet
Starting point is 00:51:04 they were handed and the software had to teach them, and it was an amazing success. And then in 21, I asked you if you'd do an XPRIZE on carbon removal, and you said yes almost instantly. And we launched it three months later. It was like the fastest yes to an XPRIZE launch ever. So thank you for that. I want to give you a quick update on it, if I could. You're welcome. Absolutely. Well,
Starting point is 00:51:37 I hope the Education XPRIZE and the Carbon Removal XPRIZE result in good outcomes. Sure. We had 6,000 teams enter the Carbon Removal Prize. We have 1,300 active teams in the competition right now as we down-select. About 36%, 460 teams, are focused on carbon capture; 430 are land-related capture; and 240 are ocean-related capture. And we've given away $20 million of your money already: $5 million to students and $15 million, a million each, to the top 15 teams. And the finals are coming up in two years, on Earth Day of 2025. And interestingly enough, the winning team needs to demonstrate megaton-level capture that can scale to gigaton-level capture. So just the final competition is going to capture four megatons of carbon, which is twice what's being done on an annual basis today. So good progress so far.
Starting point is 00:52:42 Great. That's good to hear. Well, as we begin to wrap up, what other thoughts for people on abundance, what other mindset thoughts do you have for folks here? Hmm. Well, I do think that the birth rate is too low for humans. I'm always going on about that. Longevity, baby. I'm solving it with longevity. I'm going to keep people alive longer.
Starting point is 00:53:21 Okay, well. And robots and AI. Sure. But I mean, the current situation is grim. Yeah. A lot of countries, if we look at, say, Korea,
Starting point is 00:53:38 Italy, they're losing roughly half their population per generation. Yeah. That means in three generations they're roughly one-tenth of their current size, with the one-tenth that remains being very old. Yeah, the numbers for folks: the replacement number is 2.1 children per family on average. And Europe is at 1.5. Asia dropped from 5.7 in the 1950s to 1.9 today, which is crazy. North America dropped from about 3 to 1.6. We're below the replacement level in
Starting point is 00:54:20 the United States. And it is an issue. I mean, we need smart people on the planet. Yeah. Well, I mean, if we don't make new humans, we won't have humanity. And even with longevity, we'll live longer, but we're not going to live forever.
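A rough back-of-the-envelope sketch of the generational math above (a deliberately simplified model: it treats each generation as scaling the population by fertility rate divided by the 2.1 replacement rate quoted in the conversation, and ignores age structure, migration, and mortality; the function name and constants are illustrative):

```python
def population_fraction(generations, fertility_rate, replacement=2.1):
    """Approximate fraction of the starting population remaining
    after some number of generations, assuming each generation
    scales the population by fertility_rate / replacement."""
    return (fertility_rate / replacement) ** generations

# "Losing roughly half per generation" corresponds to a fertility
# rate near 1.05 (half the 2.1 replacement rate). After three
# generations that leaves about an eighth of the original population:
print(round(population_fraction(3, 1.05), 3))  # 0.125
```

So "one-tenth after three generations" in the conversation is in the right ballpark for halving each generation; the exact figure under this toy model is one-eighth.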
Starting point is 00:54:41 So I think we just need... You know, I'm concerned that a lot of people think that the planet is overpopulated, and that's one of the things contributing to a low birth rate. In fact, some people I've encountered think
Starting point is 00:54:55 they're basically being martyrs for not having kids, and that's just not true. I think we should take the position that we actually have a civic responsibility to have kids, to at least keep the human population constant. Ideally we should grow it, but we should at least not have population collapse, which is what we currently have. I think people fear the future. I mean, the conversations I've heard are: I don't want to bring children into this world, it's too dangerous, AI is going to destroy humanity, we're
Starting point is 00:55:37 destroying the environment, and so forth. So I think part of it is getting people to be optimistic about the future versus pessimistic, which, as you said at the beginning of this space, is one of my missions. If people think the world is getting better and that they have a hand in making it a better place, and you most definitely do. Yeah, I think people should be optimistic about the future. The Earth can handle far more humans than currently exist. And the danger is not a population explosion, but population collapse. So I would just encourage as many people as possible to have kids.
Starting point is 00:56:19 And ideally have a lot of kids, because they're going to make up for those who don't. Buy an extra bottle of wine tonight, folks. Yeah, it's a big deal. So. Let's close out on the conversation of AI and AGI, which I know a lot of people are always interested in. And there's a real fear about AI. What would you say to dissuade people? I've heard you say 80% probability we make it through, and that we need to protect the downside. Can you speak to that? How can people walk
Starting point is 00:57:02 away more optimistic than pessimistic on this front? And how do you think about, you know, is it containment? Is it shaping how we train our AI systems? How do you think we navigate our digital superintelligence? Well, the rate at which AI is growing really boggles the mind. So it currently seems as though the amount of compute dedicated to artificial intelligence is increasing by a factor of 10 roughly every six months. It's faster than annual, that's for sure.
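For a sense of how fast that compounds, here's a toy projection (a sketch assuming the roughly-10x-every-six-months figure quoted above simply continues; the function name and constants are illustrative, not from any real forecast):

```python
def projected_compute_multiple(months, factor=10.0, period_months=6.0):
    """Toy projection of AI training compute growth, assuming the
    quoted rate of ~10x every six months holds indefinitely."""
    return factor ** (months / period_months)

# At that rate, compute grows ~100x in a year and ~10,000x in two years:
print(projected_compute_multiple(12))  # 100.0
print(projected_compute_multiple(24))  # 10000.0
```

The point of the sketch is just that "10x every six months" squares every year, which is why the cluster sizes discussed next escalate so quickly.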
Starting point is 00:57:52 I recently heard about a gigawatt-class AI compute cluster. Wow. I think it's being built in Kuwait, or something to that effect. And it's like 700,000 units of a chip a couple of generations beyond the H100 that's currently in production.
Starting point is 00:58:23 So this is a staggering amount of compute. And there are many such things. That's just the biggest one I've heard of so far. But there's a 500 megawatt installation happening. And there's multiple 100 megawatt installations in the works. It's not even clear to me what you do with that much compute. Because when you actually add up all
Starting point is 00:58:49 human data ever created, you really just run out of things to train on quite quickly. Like, you know, if you've got maybe, I don't know, 20 or 30,000 H100s, you can train on... Synthetic data? Almost everything.
Starting point is 00:59:07 Yeah, you basically have to have synthetic data, because with certainly well under 100,000 H100s you can train on all human data ever created, including video. And it's not just the compute, which is the major scarce resource; it's also the number of entrepreneurs focusing on this area, the amount of capital going into this area, the amount of data available. I mean, it's all increasing, and it's all feeding on itself. And so, just hitting your point about the speed at which it's progressing, I think the word awesome comes to mind, or staggering. Yeah, it's really staggering, for sure. I'm just trying to give a sense of scale.
Starting point is 00:59:56 I've never seen anything move this fast; of any technology, this is the fastest-moving thing. So in terms of aiming for AI safety, the best guess of my sort of primitive biological neural net is that we should aim for maximum truth-seeking and curiosity. That's my gut feel for how to make AI as safe as possible. The danger with programming in an explicit morality is what is sometimes referred to as the Waluigi problem: if you create Luigi, you automatically create Waluigi by inverting Luigi.
Starting point is 01:00:50 So I think we have to be careful about programming in an arbitrary morality. But if we focus on maximizing truth with acknowledged error, I think that's the way to maximize safety, and also to have the AI be curious. Because I think that Earth is much more interesting to an advanced AI
Starting point is 01:01:24 with humans on it than without humans. I agree with you. Now, an interesting question: do you think with vast intelligence comes significant empathy and respect for life? Yeah, I think so. Because that's the hope at the end of the rainbow here. I don't want to use the word AGI; I'll use digital superintelligence as a term. A digital superintelligence that is able to be benevolent and support us, because sometimes I'm not sure us squishy meat sacks can
Starting point is 01:02:06 make it through the horseshit problems that we create for ourselves. So maybe there's value there. You know, I think on the whole AI is the single most important technology we've ever invented, and it is going to uplift all of humanity. I think it's what you said: post-AGI comes abundance. I think it's the interim issues in the next one to four years, right? It's not artificial intelligence, it's human stupidity. Yeah. Well, I mean, one way that AI could go wrong is if the extinctionist philosophy is programmed into the AI, whether implicitly or explicitly. I mean, probably not explicitly, but there's a strong danger of an implicit extinctionist philosophy being programmed
Starting point is 01:03:01 into AI. What would that look like? Well, there was this guy on the front page of the New York Times about a year ago. He's the head of the Extinctionist Society. He was literally quoted as saying: there are 8 billion people on Earth; it would be better if there were none.
Starting point is 01:03:21 Oh my God. Yeah. So, if you take the extreme environmentalist argument, especially the implicit extreme environmentalist argument, there's an implicit conclusion that humans are a plague on the surface of the Earth. So I think we have to be quite careful: if the extinctionist movement were somehow programmed into AI as the optimization, that would be extremely dangerous. Yeah, to say the least. And, you know, there are people, quite a few people actually, who view humanity as a blight on the surface of the Earth. And we are coming down to
Starting point is 01:04:19 there is, you know, the accelerationist movement, the decelerationist movement, the boomers and the doomers. But I think people don't realize, or forget, that we romanticize the past. Life in the past was short and brutish, and you were dead by 40. The life that we enjoy today is a result of the extraordinary technology that we brought to bear. Yeah. So, like, you know, Hobbes: life is nasty, brutish, and short. I actually had a little Yorkshire Terrier once who was nasty, brutish, and short, and kept biting, so I called him Hobbes. Perfect. I would tell friends that came over, watch out for the dog.
Starting point is 01:05:12 And they'd look at this little Yorkshire Terrier and laugh, and then he'd bite them on the ankle. And I'd say, watch out for the dog. You know, that extinctionist meme is the same sort of thing: you discount it until it starts being a mind virus, as you've called it, and it starts disrupting us. Yeah. If you convince people that we're running out of resources on Earth, that there are too many people in the world, and that the only way to survive is to have far fewer people, which a lot of people believe to be the case... Like I said, if that somehow gets programmed into AI, and that AI becomes the most powerful AI, then we're in deep trouble. Yeah, we need a counterforce to The Population Bomb. I mean,
Starting point is 01:06:16 what an extraordinary disservice to humanity. Yeah, Ehrlich's book was a terrible nightmare. Maybe the most anti-human book ever written. Yeah, for sure, for sure. But give people hope on the flip side here, those who are saying, again, going back to: I don't want to bring children into the world, because I hear this all the time, and I'm sure you do too, in an age where AI is going to destroy us.
Starting point is 01:06:50 So short-term problems, long-term problems, short-term solutions, long-term solutions. You said make it curious, make it maximally truth-seeking, 100%. Is it okay to say that we're going to have issues in the short term and we're going to have to deal with them? Or do you think... There will be some issues. I mean, it's interesting. At this point, there's no way to stop AI. It is accelerating whether people like it or not.
Starting point is 01:07:20 I mean, that's why, together with a number of really smart people, we created xAI. And, you know, hopefully some really smart humans will continue to join xAI and build what is intended to be a maximally truth-seeking and maximally curious AI. Anyway, I think that's really important. Elon, listen, on behalf of those of us, and I think everybody here listening, who are pro-humanity and pro-uplifting humanity, and making us a multi-planetary species, and living longer, healthier lives: the one place I disagreed with you on X is on having people live longer. People don't need to die
Starting point is 01:08:29 for there to be new ideas. You know, the CEOs of Ford and GM didn't have to die for Tesla to come into existence. So I want to just say thank you for all that you're doing, and for setting a model for other industrialists and entrepreneurs out there to take on and solve big, huge problems. Well, thank you. I would encourage people to be optimistic about the future. Err on the side of optimism. You will be happier for it. And, as you pointed out, probably live longer. So I'm not totally against life extension.
Starting point is 01:09:09 I don't think we necessarily want to have people live forever, or live for a very long time, like thousands of years. That, I think, would potentially lead to ossification of society. But solving dementia and curing cancer, I think, are obviously good things. So I'm not sure we're actually that far apart on the life extension thing. I think we're probably mostly in agreement. I just don't want to have, like, do you want Kim Jong-un living for a thousand years?
Starting point is 01:09:53 No, but I'd like Elon Musk and Peter Diamandis living for 150 years, and a few thousand other amazing entrepreneurs out there. Sure. So there is a challenge that a lot of people really never change their minds; they just die.
Starting point is 01:10:15 And this is actually, I'm not sure who originally said this, but even physics, which is extremely rigorous, has in a lot of cases advanced one death at a time. Yeah, my friend Brian Keating, who's an astrophysicist, reminded me of that. And maybe it was true, but in this world of entrepreneurship, I think we do live in a meritocracy to a large degree, and the best ideas can bubble up to the top faster than ever before, especially with AI now, where you can build companies, you know,
Starting point is 01:10:59 extraordinary companies with a couple of people and a lot of tech. So, yeah. I have one last question, and then I know it's dinnertime there. The future of xAI: what's your vision there, pal? Well, again, I think the path to AI safety is to build an AI that is maximally truth-seeking with acknowledged error, that is maximally curious. And I think that is most likely to lead to a good outcome for humanity. Because a universe with humanity is much more interesting
Starting point is 01:11:46 than one without humanity. Obviously, I'm a big fan of Mars, but Mars is much less interesting because there is no human civilization there. Take humans, for example: we could hunt down all of the chimpanzees and kill them, but we don't.
Starting point is 01:12:12 And in fact, we make efforts to preserve their habitats. So, anyway, I think the path to a great future and a maximally positive AI is to be rigorously truth-seeking, always acknowledging some amount of error, and maximally curious. And that's the goal of xAI. The company motto is: understand the universe.
Starting point is 01:12:58 That's a good mission, and one that's going to take a bit of time. I can't wait until the first AI is able to come up with new theories of physics and new innovations. I don't think that's far away, and I think that's going to be one of the most awesome times ever to be alive. True. We definitely live in the most interesting times. And actually, for a while I was
Starting point is 01:13:22 kind of depressed about AI. But then I kind of got fatalistic about it and said, well, even if AI was going to end all humanity, would I prefer to be around to see it or not? I guess I would prefer to be around to see it, just out of curiosity. Obviously, hopefully, AI is extremely beneficial to humanity, but the thing that sort of reconciled me to be less anxious about it was to say, well, I guess even if it was apocalyptic, I'd still
Starting point is 01:14:00 be curious to see it. I remember I was at your birthday party at one of your homes here in LA, before you sold them. And Larry Page was there, and Sergey was there, and we were having a conversation about living in a simulation. And the notion was, we're in the 99th level of the gameplay, and this has got to be a simulation,
Starting point is 01:14:26 because why would we be alive right now, in this single most interesting time? And then the only comment was: don't poke the simulation, or it will end. Yeah. Well, if we are in a simulation, the way to keep the simulation going is to keep being interesting. So, like, humans run lots of computer simulations because we don't know what the outcome is going to be, and we're curious to see. We run lots of simulations: Tesla will run crash simulations, and SpaceX will run rocket flight simulations. And we only stop doing the simulations when the outcome is extremely predictable and boring. Yes.
Starting point is 01:15:19 So if we're in some alien computer... That's a great argument. That is perfect. When it's absolutely known. If we're not entertaining enough to the digital gods or the universe gods, they'll end the sim. Yeah. We just need to make sure we keep the ratings up.
Starting point is 01:15:46 But this is also why I think one of the ways to predict the future is that the most entertaining outcome is the most likely, as seen by a third party, as though we were in an alien soap opera. I love that. Yeah. So it's not necessarily good for those in the soap opera. Like, you could be watching a World War I movie and seeing people get blown up, and you're just eating popcorn and drinking a soda. But we're in the movie. Well, let's not get dystopian here, because I think we can have a really positive outcome, and a lot of other cool stuff. Yeah, for sure. I think the most likely outcome is positive. And I think that's partly a self-fulfilling prophecy: if you believe it's going to be a dystopian outcome, then you're going to be back on your heels, protecting yourself, not investing in the future.
Starting point is 01:16:47 And it unravels. And so, you know, I think the message of this entire space is that this is the most extraordinary time to be alive, the most exciting time to be alive, a time where a lot of individuals listening on this space have more power than kings or queens or heads of nations had just a few decades ago. And it doesn't take a government or a large corporation to solve a problem anymore. An entrepreneur with a few H100s can do a good job. Yeah, I think we should err on the side
Starting point is 01:17:29 of optimism, and we should have kids. You know, to err is human, and to heir, H-E-I-R, to reproduce, is human. All right, buddy, thank you so much for your time. Thanks for sharing. You're welcome. All right. Thanks, Peter. Take care, buddy. Bye.
