Moonshots with Peter Diamandis - The AI Wealth Gap: Why 40x Deflation Changes Everything w/ Dave Blundin, Salim Ismail, Dr. Alex Wissner-Gross | EP #208

Episode Date: November 17, 2025

If you want us to build a MOONSHOT Summit, email my team: moonshots@diamandis.com
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends
Dave Blundin is the founder & GP of Link Ventures
Salim Ismail is the founder of OpenExO
Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified
–
My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
–
Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/
Connect with Peter: X Instagram
Connect with Dave: X LinkedIn
Connect with Salim: X Join Salim's Workshop to build your ExO
Connect with Alex: Website LinkedIn X Email
Listen to MOONSHOTS: Apple YouTube
This week's outro song: https://suno.com/song/c9e369d0-b182-4114-9dae-9b8861164c52
–
*Recorded on November 15th, 2025
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 So the number one concern globally is cost of living. And tied very closely to that is unemployment. Will I get a job? And then the third concern is poverty and social inequities. And we talk about a future of abundance, we talk about demonetization, but this is the reality of what people are feeling. This is a story about preparing for the worst.
Starting point is 00:00:22 Maybe the worst thing any of us can imagine. The wealth isn't going to go to the people who are doing the work or the people who get unemployed; it's going to go to making the rich richer and the poor poorer. You've got a third of your income going into your phone and your data plan, and all that money funnels out of the country and lands in, like you said, Silicon Valley and Boston. And then you add AI as a layer on top of that, and the gap is going to get really, really wide. So that's the reality of a huge fraction of the world's population, though.
Starting point is 00:00:52 The question is, how do we help people believe in a hopeful and compelling future? because if they don't believe it's a hopeful and compelling future. Now that's the Moonshot, ladies and gentlemen. Everybody, welcome to our episode on WTF Just Happened in Tech. Here with my Moonshot mates, Dave, Dave, Celine Ismail. Good morning, Salim. And Alex Weizner Gross. So, guys, it's been quite a week.
Starting point is 00:01:23 You've been in Brazil, Saleem, just about. back? Yeah, it's just back from three days in Brazil. Turns out we have a massive viewership and listenership there demanding that we be able to translate this into Portuguese, so we should look at that. Okay, we'll do that. I just got back from Milan and Madrid, right? The middle of Europe talking about AI and we'll talk about it, but there's a lot of concern and angst about how far they're falling behind. And we've got a subject to discover on this episode as well on that. Dave, how's your week been? Phenomenal. I get to get to hang out
Starting point is 00:01:59 with Alex face-to-face, which is a rare treat and we brainstormed a ton of things going on. Can't wait to talk about them today. Awesome. I have a general complaint. What's that? You know, we, over the last few weeks since we've recorded, there's so much stuff been happening. I need an Alex,
Starting point is 00:02:15 Dave, and Peter AI next to me that just real time helps me interpret stories. We should maybe think about creating a GTP. Good news. It's in the queue, Saleem. We're We were working on it. I budgeted it and we're doing it. All right. All right. Alex, if this is the physical Alex or the AI Alex, I have no idea. But how are you doing, pal? What's the difference? Probably nothing. Probably not much. The world's convinced I'm an AI already. Well, we'll figure that out soon enough. So everybody, this is the news that's worth listening to. Hopefully, the news that helps you keep up with how fast things are moving and keep. keeps a positive vision of the potential in your mind.
Starting point is 00:02:59 As we say every week, I know we all spend at least 20 hours independently and together getting ready. So let's jump in. Let's open up with the hyperscalers news about Anthropic, Google OpenAI, still going on. In fact, this week in particular, a lot of news about Anthropic, which hasn't hit our WTF episodes in the recent past. Here we go. Anthropic overtakes open AI in enterprise LLM API market share. All right, over to you, Alex. What's the significance here? We see this chart.
Starting point is 00:03:34 Open AI's dropping Anthropics rising. What does it mean? I think the central question, Peter, is whether code generation is the critical path to recursive self-improvement. If code generation is the critical path, then one can expect amazing outcomes from Anthropic, which is quite publicly focused its strategy on code generation, perhaps to the exclusion of other modalities like, say, video generation like we see from Open AI with SORA. On the other hand, if code generation turns out to be missing some special sauce needed
Starting point is 00:04:06 for superintelligence, broad superintelligence, and recursive self-improvement, maybe this trend won't last. But I think that's sort of the core question here. Dave? I have a completely firm opinion on this. I'd love to, I don't want to lead the witch. goodness, but I could throw it out there first, Alex, or you can tell you. So what is the answer? Is the LLM scaling recursive self-improvement loop enough to crack to the singularity and
Starting point is 00:04:31 infinite intelligence or not? What's your guess? I don't know. If I had supreme confidence on this one, it would be far easier to make investments in this space. I can make steel man arguments on both sides. The steel man argument in favor of cogeneration as critical path to singularity. basically, to the extent it's a finite point, a fixed point, and not sort of an extended object. It would be something like cogeneration, we leverage that to rewrite the core algorithms and the key models and architectures and post-training architectures underneath frontier models, and that just sort of spins, the flywheel spins faster and faster. The steel man in favor of code generation missing something looks something like, well, maybe we need visual chain of thought
Starting point is 00:05:25 or maybe there's some grounding in the physical world that's essential for general purpose knowledge, general purpose reason that you can't just get from looking at large source code bases and the internet of text tokens and maybe a little bit of imagery. So I'm not sure. Well, I'm hardcore. Hardcore in the camp of that Dario is on the right track. And it's not just It's not just anthropic. Look at the chart. Google's also on this same trend line. And so it's either Dennis or Dario. And by scaling what they've already got and turning a huge amount of the compute internally and having it generate the next test, the next test, the next test, the next test. And I think to me, the tipping point there is humidity's last exam. And what we're hearing
Starting point is 00:06:08 and what you're hearing is that that will be saturated very soon, which is mind-blowing, given how hard those questions are. But to me, the solution to those questions and the innovation in AI are incredibly correlated problems I took a different take on this one I think what's interesting here is AI is actually showing that it has a real business model and and that will be a really powerful feedback loop going forward well the enterprises that I talk to the banks and the you know they're all using Anthropic because they trust it with sensitive data while open AI is going consumer you know and not not kind of
Starting point is 00:06:47 going after that corporate market. So they're both going to thrive, but a very different. Their liability that they're providing is amazing. All right. Well, let's go to the next story here on Anthropic. And again, congratulations to Dario. Anthropic projects $70 billion in revenue, $17 billion in cash flow in 2008. So, I mean, we haven't heard a lot about Anthropic over the last couple of months, right? It's been OpenAI and XAI and Google taking the headlines. But Dario, is gaining ground here. How do you see them competing against the other hypers, Dave?
Starting point is 00:07:25 I think he's positioned really, really well. You know, the numbers aren't as big as Open AI, but the enterprise market is wide open. And, you know, what Open AI is doing is directly competing with Google, which is very aggressive, very cool, but also risky. So I like the angle they're taking here because, you know, enterprises need AI. Using AI as a management tool is the gold mine of all gold mines. And I'd love to riff on that for hours, some other podcasts. But everybody doing that is using Claude and using
Starting point is 00:07:57 Anthropic as their backbone. And so he's not facing a lot of competition right now in that market. It's not as fast growth. It's not as sexy. But it's a really good strategy. And I think you'll hit these numbers. You know, when I was with my friends at Google, you know, there's an interesting point that they view Anthropic as the other, you know, friendly AI company, right? They're obviously at each other's throats between OpenAI and Google and OpenAI and XAI. Anthropic is sort of the friendly little brother to the other hyperscalers. Isn't it ironic, though, that Open AI's original mission was what Dario is now actually known for, you know, the machines of loving grace and the, you know, we are, if you want to work in AI,
Starting point is 00:08:42 but you want it to be guaranteed to be good for the world. come to Anthropic. He's really grabbed that high bar, that high ground on that topic. I think there's something fundamental, though. We've seen this happen over and over again that what becomes ultimately a frontier lab starts as an alignment lab. I think there's almost a perverse duality between alignment and capabilities. That if you're the world's best lab at aligning AI with human interests, that immediately, whether it's for economic reasons, like you need to raise capital in order to train super aligners, or just for purely technical reasons, that if you can align a model really well with human intent, that immediately
Starting point is 00:09:24 itself is a strong capability, I think every alignment project almost inevitably ends up as a capabilities project. So I think it's not just coincidence that Open AI started as an alignment-oriented effort to ensure that there wasn't just a global singleton in the form of deep mind for superintelligence. and anthropics, similarly, in the long tradition of Silicon Valley and the fair children also started as being alignment focused and then almost immediately pivoted to capabilities and superintelligence. I think that's just the law of economic nature here. Totally right. I have a counterpoint to that. You know, Facebook started off as being very aligned to protecting privacy, user privacy and never leaking privacy information. And then they sold it
Starting point is 00:10:07 for profit, not AI related, but just in terms of business. model related. So at some point, this could become extractive, right? One of the stories here that I want to hit on is the economics here. So interestingly enough, when I did some digging here, so Anthropic is projecting $70 billion in revenue by 2028 at a 77% profit margin, right? That's pretty extraordinary if they can hold on to that. On the flip side, Open AI is projecting $100 billion of revenue and unprofitable until 2009, right? I mean, just their deployment of capital into data centers and model growth. I'm super curious, the two business models, Anthropic versus Open AI, what do you guys think about that?
Starting point is 00:10:55 Well, I think a lot of these companies are capable of having high margins on short notice, including Open AI. And they're trying to tell the market, Open AI, in particular, that I intend to keep investing ahead of the curve. So if you'll give me the trillion dollar valuation, and that's a better, if you can pull that off, it's a much, much better. You can tell you having taken a company public, as soon as you switch to profitability, it's very hard to go back to your shareholders and say, oh, I want to burn a trillion dollars building a data center. So Sam is declaring that up front, which is great strategy as long as the shareholders believe in it. Which is what Bezos did. Yeah, exactly.
Starting point is 00:11:28 That's why Amazon is. Yes, exactly. Do you remember Bezos's famous newsletter at the beginning when he started? He goes, listen, I am not going to be profitable. I'm going to be spending money. If you want a profitable company, go someplace else. Otherwise, I'm buying, you know, customers. I'm buying revenue.
Starting point is 00:11:46 And he did. And then he flipped the knob. He flipped the switch. Yeah, that's exactly right. And what Dario will do here in all likelihood is declare this kind of margin, 77% gross margin. But then as that date approaches, he'll launch a new project under a new name and say, well, we're going to consume all that money, building Stargate or, you know,
Starting point is 00:12:04 with whatever, Anthropic Stargate or whatever it is. And that's a good strategic. shift. But, you know, before we leave this story, these numbers are much bigger than anything in the history of the world, much bigger, the growth rates and the scales. And I just want to make that point, because we get a near to these stories so quickly. And we get, oh, I already heard that. We get numb to the trillions, right? Numb. Yeah. Oh, and also in terms of life plan, you know, anyone who's doing something, not this, you got to, you got to consider how do I get into this? This is so much bigger than all other endeavor combined. The, the next
Starting point is 00:12:38 article here, which is a fascinating one, is Fei-Fei Lee's World Labs unveils world-generating AI models. And we had this conversation with Fei-Fa backstage at FII. Let's take a look at the video, because the implications of this are absolutely huge. Finally, I thought you bailed again. Please. I wouldn't. All right. All right.
Starting point is 00:13:34 If you were listening to this podcast and not watching that beautiful video, what you saw is an extraordinary, immersive, photo-realistic virtual world, a world model that Faye has been building. And I, for one, you know, I'm both fascinated and concerned. I'm fascinated by these world models because it's magic. I'm concerned that it's going to be where we spend a huge amount of our time.
Starting point is 00:14:06 I just finished reading a book called unincorporated man for the second time. And in this book, which is a fascinating conversation by itself, but in the book, the world has a crisis because everybody starts spending all of their time in these virtual worlds to the exclusion of work and the exclusion of eating. And it decimates the population. So, you know, I know, our kids, my boys right now, maybe your son, Salim, as well, a lot of time spent in video games. But if they become so photoreal, realistic, so immersive, and you are the God of your world. Why would you want to spend time anyplace else? I think it proves that we're living in a simulation. Well, yeah. I mean,
Starting point is 00:14:49 I tweeted that after I had the conversation with Fifei, which was, you know, I have no question we're living in a simulation. But even if we are, you know, what would you do differently? And people are just so tired of this conversation about are we living in a simulation? But before we get. Yeah. Go ahead, Dave. Yeah, before we get too deep into the story, because I really want to hear from Alex on, this is very, very different from what you think it is. But Alex will explain it to us. But I'll tell you on a personal note, when we were backstage with Fei Fei in Saudi, anyone who's an aspiring leader, entrepreneur, visionary out there, we're backstage. She's one of the gods of AI.
Starting point is 00:15:28 And she says, oh, Peter, Dave, let me, I'm so excited to show you. She whips out her phone. And she's showing us the product in like five seconds. And she's like a kid in the candy store. excited about what it can do. And that, that enthusiasm, but also ability to show your thing in under five seconds is so infectious. And, you know, at her level, the fact that she's still doing that, learn from that. Everybody should be able to do that. No matter what you're excited about, you should be able to project it in five seconds or less on your phone, pull it out,
Starting point is 00:15:58 have it ready to go. That's so cool. All right, Alex, talk to us. So the consumer story here is we're seeing the beginning of the holodeck wars. The technical story, story is underneath the holodeck wars, we're seeing different approaches for generating entire holodeck type simulations from scratch. We see on the one hand, the Google Gene E3 approach where every pixel is being generated by a single model. On the other hand, I think what World Labs is demonstrating
Starting point is 00:16:30 here with marble, their marble model, is sort of the opposite end of the spectrum. It's generating not individual pixels, but so-called 3D Gaussian splats or 3D Gs, which are sort of transparent blobs that cumulatively together build up into what looks like a photorealistic, 3D traversable world. But 3D Gaussian splats are so compute efficient that you can just dynamically recompute visualizations locally
Starting point is 00:16:58 on your computer, on your client. Whereas the pixel-wise generation approach of GNI3 and other competing models, those require compute-intense server-side GPUs. So I think we're starting to see the beginnings of almost an efficient frontier of trade-off between compute versus versatility. And they're going to be upsides and downsides to each of these. But in the same sense that we saw with frontier models, some models live on the edge that are relatively compute light, some are server-intensive. I think we're going to see a range of different levels of worlds that we're able to generate. And all of these, I mean,
Starting point is 00:17:36 the consumer use case, I think that this is chicken feed compared to the larger addressable market in my mind, which is using these models for training synthetic data, for more capable vision language action models, for robots, for scientific discovery. That's the much larger market, but this is fun in the short term. Our next story here, new method helps AI forget memorize data without losing reasoning skills. is a big deal. I'm going to go to you, Alex, first on this. Yeah, I often speak about, in the near term future, about reaching a diamond-like perfect micro model of a frontier model that externalizes almost all knowledge. So it's pure reasoning
Starting point is 00:18:24 model and all of the knowledge can live outside the weights of the model in some external database or some external tool call. And I think this paper by Goodfire, it's such a clever way to externalize all of that knowledge to distill down, no pun intended, to the essence of a core model toward this vision. The basic idea is to look at the weights inside the model and to distinguish which weights represent knowledge versus some sort of general reasoning capability by looking at which weights if training over multiple examples and running the model over multiple examples, which weights impact the overall so-called loss or the ability of the model to match the desired output if they're changed a little bit. And the weights that are important to generalization
Starting point is 00:19:15 will have dramatic impact on the overall loss of the model if those weights are just tweaked a little bit. So can I take a sort of a slightly different frame of this? So if you've trained your AI model on all of my company's health care data or all of my company's financial data. And do I trust you then? And it has some incredible results coming out of that. Do I trust you to not give up my core health care data or financial data while still adding value, right? So apparently what they're saying is they found a way to make AI forget specific
Starting point is 00:19:54 memorized content without retraining the model from scratch, without wrecking its intelligence. So you can forget the specifics about all the health care data, forget the specifics about all the financial data, and the model still delivers the same value. That's what I understood it to be. Alex, is that correct? Half correct.
Starting point is 00:20:13 So it is correct that this is a pruning or more generally a regularization technique for helping a pre-trained model to forget knowledge that one might call memorized. but that really the emphasis in the inflection is less about some sort of enterprise privacy or some sort of like ML data firewall feature and more about figuring out which parts of the modeling. Models are huge. Models are in many cases hundreds of billions, if not low trillions of weights. That's very compute intensive. It would be highly desirable to figure out which of those
Starting point is 00:20:49 weights are actually needed for general capabilities and which of them are just AIs memorizing arcane and quite possibly wrong facts on the internet or from an enterprise environment. So this is more about generalization capability, less about filtering out enterprise privacy. Is it also about making it a lightweight model to run on systems? That's the holy grail here. The holy grail is could we have maybe a sub-billion parameter, maybe aspirationally a million parameter model that's generally intelligent? That would be an incredible outcome. Wow. You know, if we solve memory in some of this, That's a huge breakthrough.
Starting point is 00:21:26 And then maybe the next thing you could add in some mechanism for adding in curiosity into the model because that feedback loop would be unbelievable. It's a whole cottage industry focused on active inference and building curiosity from scratch as an instrumentally conversion motivation into these models. Wait, instrumentally, can you repeat that? Instrumentally convergent?
Starting point is 00:21:48 Instrumentally converging. So instrumental convergence, very important term. Instrumental convergence is this idea that in order to achieve a variety of different long-term objectives, you're almost forced to achieve common short-term objectives. For example, if I have a superintelligence, and on the one hand, it's instructed to build lots of paperclips. On the other hand, it's instructed to cure cancer. Probably both of those long-term objectives require that in the short-term, it accumulate capital, maybe solve science, maybe build a bunch of factories. So those are instrumentally convergent
Starting point is 00:22:20 motivations. Instrumentally convergent near-term. Yes. Convergent in the near-term, divergence in the long-term. Yeah. Well, high-level observation. This is so ridiculous. I learn something magical every single time. Every single goddamn slide, for God's sake. Well, this topic
Starting point is 00:22:38 is incredibly important and somewhat obscure, but there are many, many, many people working on outside the box chain of thought reasoning and vertical use cases, but very few dorking around with the weights. But there's so much opportunity when you're messing around with the weights. I mean, these 90% reductions in parameter count through distillation are pretty common. And you know, you think about, okay, hey, we need 100 gigawatts of power. Oh, wait, I distilled it by 10x. Now we only need 10 gigawatts.
Starting point is 00:23:03 The implications of that are trillions of dollars. So there really ought to be a lot more people playing with the open source weights and trying, you know, because these things are really working. And this is a great case study. So this really gives you the brain equivalent of neuroplasticity. And also, you know, the psychology researchers, who for years have been trying to deal with, you know, surveys and outside the body tests, you can make so much more progress than understanding the nature of intelligence by playing with the weights of a big neural net and trying things like this. So a lot of more people in that area, cognitive psychology types, should be working on this too.
Starting point is 00:23:38 All right. Here's our next article, which is related but different. And I think more about neuroplasticity. So Google introduces nested learning, machine learning paradigm for, continual learning. And as I read into this, I'm like, wow, this really is a big deal. Alex, I'll go to you first on this one. Yeah, so this is a pretty dense, if I may say so, Nurep's paper,
Starting point is 00:24:06 Nureps, the arguably premier AI machine learning conference, parenthetically, it's happening in early December. Definitely, I'll be there. Many folks I work with will be there, encourage the community to reach out to me if you're going to Nureps and would like to connect. But the core thesis of Google's nested learning paper appears to be one focused on higher order meta-learning. So meta-learning, as a reminder, is learning to learn.
Starting point is 00:24:34 It's of core interest to the AIML research community. If you could learn to learn, that almost obviates a lot of machine learning research. So this is focused on higher-order meta-learning, rather. So learning to learning to learn and so on. And the core insight here is that models and model architectures on the one hand and optimizers that right now we use to train the models may actually be two facets of a common object. It's almost reading the paper, it's almost aspirationally seems to be fishing for a grand unified theory of machine learning,
Starting point is 00:25:12 where an M theory of machine learning, if you will, where all of these different processes are actually just facets at different levels, different orders of abstraction of a single common paradigm, which is, as I and others have argued in the past, compression of information. So I like to say, if we could send a message back in time and explain how we got to AGI, turns out it's very easy.
Starting point is 00:25:37 You just take a large amount of knowledge about the world and you compress it. And if you compress it beyond a certain point, you get some sort of phase transition and general intelligence. pops out. That's basically, I think, the story of AGI. When I think about college, I think the entire college experience for me was learning how to learn. Everything I learned was, you know, irrelevant, you know, some number of years later. And so when I read this, Alex, what I'm seeing,
Starting point is 00:26:06 and I'm curious, it's about enabling continuous human-like learning, right? Sort of a step towards lifelong learning. Yeah, a step towards truly a adaptive, lifelong learning for AIs. Because historically, what we'd train up an AI system and you would freeze it and then you'd do inference on it. Here, this is something that's continuously learning in almost a human-like fashion. My college experience wasn't about learning how to learn. It was learning how to avoid most of the bullshit that was taught there.
Starting point is 00:26:42 Okay. Learning how to forget. There you go. Dave, any thoughts on this particular article? Not yet, but I think Alex's explanation was perfect anyway, but I think that one thing people overlook is that the machines doing this have access to incredible numbers of tools. And when you're learning how to learn,
Starting point is 00:27:04 you're thinking about never forgetting, but you write things down, you refer to them, you have your laptop, you have your phone, and the AI version of that has immense bandwidth between its brain and its notes. And so a lot of the innovation is taking advantage of that incredible, you know, it's clearly super intelligent before it's even, you know, intelligent. I have one of the point on this.
Starting point is 00:27:29 That little explanation by Alex, I'm going to have to go back and listen to it like four times over, just not parse, all the stuff. We don't need to get into it here. But we may need a whole podcast episode just on that. I love that. It's kind of pretty fundamental, right? Yeah, one of the things we talked about was also getting questions from our, our, our listeners and doing an AMA session based on those questions.
Starting point is 00:27:50 So if folks are interested in doing that, you know, drop us some hints in the chat. And drop us some questions in the chat. Like, what do you want us to focus on? We'll do an AMA session. There's just so much news week to week to cover. It's like otherwise we're going to be publishing an episode every day. So an Alibaba-backed company called Moonshot AI is launching a new ultra-low-cost AI model. As I read this, I'm like, wow, this is a big deal.
Starting point is 00:28:21 They clearly watch the pod. They watch the pod. Why? Because they called it moonshots? Yeah, the name. Okay, we're going to claim. I think Astro Teller claims moonshots as the captain of moonshots at Google. We'll try to enforce your trademark in China anyway.
Starting point is 00:28:39 Dave, what's your take on this one? I hope people appreciate what a huge deal. this is, this is the, in my mind, the biggest thing that happened in the last month. So, you know, these Kimi models actually are top of Sweebench, right up there with Anthropic, you know, just 1% or so below. But they run on GROQ, not Elon, GROQ, not Elon, GROC hardware, which, you know, we learned all about in Saudi Arabia at blazing speeds, incredible performance.
Starting point is 00:29:10 And they're the best open source models. So if you want to play with the weights, you know, you've got to come with the, the Chinese stuff, because, you know, meta stopped open sourcing, open AI stopped open, open weights, open sourcing. So now these are the absolute best models in the world, and they're all coming from China. But the fact that you can train it, $4.6 million to train it is within the budget of almost any company. And it's not 10x. It's more like 30, 40x cheaper than what it took for Open AI and Anthropic to build their original models. And so part of it is you're drafting off their innovations, which is why those companies stopped open sourcing.
Starting point is 00:29:43 but you know you can grab these weights you can grab this open source and you can build on top of it and so just be careful there's no spyware or anything in there but you know i checked and i'm using it and uh it seems to be okay uh but but do your own spot checking but this is a huge deal uh in terms of the power that somebody wants to play with the guts of one of these things the amount you can get done on a limited budget just skyrocketed and you know what i found most significant you know there's been a conversation going on we heard eric schmidt talking about about this, that it's the U.S. financial markets and their efficiency that are allowing these hyperscalers to raise billions of dollars to do what they do. But if all of a sudden
Starting point is 00:30:24 the cost of training a trillion parameter model is $5 million, you don't need efficient capital markets. That money is available from a lot of locations. Isn't that wild? I mean, it's so right. I mean, just think about that. I don't know if anyone appreciates what you just said, the implications are massive, just absolutely massive. Alex, what's your take on this? Yeah, a few takes. One, as Sam and others have pointed out, there's hyper-deflation going on right now on both the training and inference side for AI. So Sam's number is 40x year-over-year hyper-deflation. So on the one hand, I'm not surprised. Wait, wait, one second. 40-x hyper-deflation year-over-year on the In cost of intelligence per unit of intelligence.
Starting point is 00:31:15 That's insane. That's a big, that's, so. So this is, I mean, I think this is like 20 times faster than more slow. Yes. This is why I speak of like the innermost loop as a catchphrase. Because when you see, if we can see sustained 40x year over year hyper deflation in cost of intelligence, everything else is going to get dragged down. The price of everything else, rather, is going to get dragged inevitably down with that.
Starting point is 00:31:40 So this is sort of, this is the nuclear core, if you will, that's going to pull down the cost of everything else. And because the demand is growing like a thousand X a year, that's just what just of has the capital expenditure. Yeah, pick your analogy. Yeah. Well, you look, when you walk around academia or corporate boardrooms, you find deniers everywhere. I mean, just absolutely everywhere. And a huge amount of the denial is, well, I tried this yesterday. It was hard.
Starting point is 00:32:08 It didn't work, you know, so therefore we're nowhere near AI. When you talk about 40x hyperdeflation, deniers are saying, well, look, the evidence is that as we scale these things, they're getting decreasing improvements in intelligence. So I don't think there's anything, you know, 40x for two years in a row, two back-to-back 40xes, it's not going to do much. Like, that's a really, really risky position to take, dude, because there is, there is significant upslope in the data, and you don't know what 40x is going to do. But if you had to bet, is 40X going to be mind-blowing or is 40X going to be not much of an improvement? You're crazy to take the position that it's going to be a little bit of an improvement. Crazy, especially back-to-back 40-Xs. Yeah, no, I think we're going to start to see grand challenges in math, science, engineering, medicine, start to fall over the next two to three years, thanks to that.
Starting point is 00:32:59 So can I take the other side of this? Yeah, because I will beat the crap out of you. Let's do it. No, no, I think if you look at the demonetization curves over history with our... other stuff like solar energy, cost of compute, et cetera. You would almost expect this because you want to see that curve go in this direction. 40x is like way faster than I thought. But I'll give you the macro counter example.
Starting point is 00:33:24 I remember when the Google car first came out, all the car manufacturers said, well, that's bullshit. The cost of that lithium ion battery is too expensive. It's never going to work. And they all ignored it. And then literally over the decade, the cost of lithium ion batteries, up 90%. That's what Elon banked on for building the Tesla, was the bankie on the cost of the batteries dropping. And he was right. And all the carmakers and all the broad, the typical mind of folks. So there's a powerful lesson here. Always watch for those deflationary curves and go
Starting point is 00:33:56 where the curve is pointing you. What's surprising is the world's most powerful technology, right? By far, you know, if I had gone back five years and described what an AI model could do today and how much would you charge for it, you know, per day or per month, I would have guessed, you know, millions of dollars or hundreds of thousands of dollars and would never have meant, you know, expected it to be free. I mean, it's effectively free. We have to eat our own dog food and kind of go, this is where the curve's going. Let's expect this at this point and see if we get it right. Yeah. Yeah, I'm constantly running around the office and saying, guys, you know, within these virtual areas like computing and AI, it's really hard to visualize 40x. But if we
Starting point is 00:34:36 imagine we had a factory that makes widgets or cars and we've 40xed our production year every year everybody in the office would be going crazy you'd see like 40 times more stuff coming out the door it would be obvious to you that's what's happening and you have to really stretch your brain around the implications of this because there's a great quote on the demonetization and the fact that the energy guys all got it wrong and whatever this is that same issue right and to Alex's point of the inner loop if you rolled this out to the broader kind of macro things that we do, you should expect
Starting point is 00:35:10 a 40x drop in the cost of health care and in food production and everything over time as this bites, correct? That's my expectation. Assuming intelligence can solve the problem. It was a great quote from like Gordon Moore and I'm going to botch it. I don't have the exact numbers. He said, you know, if
Starting point is 00:35:26 cars had improved at the same rate as Moore's law, you know, rolls, rolls would get like a million miles per gallon and would, and you throw it away because it's so cheap at the end of your trip. Yeah, the stat I heard was that if the top speed of a car and if we all evolved at the same speed, we'd have a car today that went faster than the speed of light.
Starting point is 00:35:47 Yes. All right. I want to jump into this next article, especially coming back from a week in Italy and Spain. The story headline here is Brussels to loosen GDPR rules to enable and feed the AI boom. So, you know, really important conversation. There was a lot of concern, angst when I was meeting with CEOs, consulting companies, investors in Europe, in Italy, and in Spain, right? The GDP of Italy, about $2.7 trillion, about the size of New York, the GDP of Spain, about $1.7 trillion, the size of Florida. but serious concern about can they compete and and this is one of the biggest issues that's
Starting point is 00:36:38 keeping them from competing ability to access data selim what's been your experience here you know this is part of these problem with over regulation is you just slow this down the cost of compliance for stuff like GDPR has proven to be a ridiculous thing and this is a function of the historical issue with Europe. And to be fair, it's not just mindset here, although there is a far period. There's just a bunch of historical legacy that's really important. I'll give you a specific example. After World War II, the German constitution added a clause saying no media organization could cover the whole country so that you could never have another rise. And that prevented a regional player from covering the country and then Google came along and rolled up the whole thing. Right.
Starting point is 00:37:23 And so they've got structural issues going back in history and you have to figure out how to undo some of those. And that's a really, really hard thing to do. They're doing kind of the best they can in a difficult environment, but it's just a massive problem. I mean, for the longest time, and you and I remember this at Singularity University, right? They, you know, Europe in general prided itself on having the strictest privacy laws out there. And a lot of that simply meant that, okay, we're going to exclude everybody from Europe in a U.S. product or service. I think what's interesting is that the data showing that venture funding in Europe dropped up to 30% as a result of this and that the AI models in Europe have been six to 12 months slower to market than the United States.
Starting point is 00:38:09 And the compliance burden, so get this, so there is an AI audit that's required before you put out a product. And those audits on average cost $260,000 euros and take 8 to 15 months, right, delaying 40% of projects. So imagine that. And so you have to actually prove during your compliance audit that, you know, they review the data sets, the model transparency, bias, you know, documentation, safety standards. So some third party is auditing you and just putting sand in the gears. Now, I understand why people want that to some degree, but you're trading that against your economy. It's been really interesting. If I think about my grandparents, you know, my grandparents assumed that there was no structure in the world as chaos.
Starting point is 00:38:58 And, you know, the military protects us, but other than that, it's just a zoo. And then when you look at the generations, if I talk to my kids, they assume that there's some rational thing out there that's thinking through these issues. But there isn't, we know there isn't. They assume there are adults in the room. Is that a strange thing when all of a sudden you say, oh, my God, I'm the adult in the room? It's really, really freakish when you hit that. Yeah, it's why I'm hoping for a benevolent superintelligence to actually be the adult in the room someday. Yeah, we need Mo Gadda's thinking, move as quickly to AI running the world as possible.
Starting point is 00:39:33 That should speed up Alex's inner loop considerably. Alex, any thoughts on here that GDPR rules being changed? Maybe just a broad comment that under the current construct, it's up to individual sovereign countries to define the parameters of how much they want to participate in the superintelligence explosion and maybe just leave it at that. Yeah. I mean, the conversation I had with a lot of the leaders in the tech industry in Italy and Spain is, okay, if you guys are interested in playing this, you need to build out your energy sources, identify where you're going to set up your data centers. and I think the time frame, and I'm curious what you guys think about this, I think the time frame for make those decisions and implement those decisions is the next five years. I mean, I think the next five years are going to set the objective for the next century.
Starting point is 00:40:24 More like five months than five years. I think it's a little bit shorter than five years. Okay, I was being generous because you can't do anything in five months, which is a big concern, and we can talk about that in the U.S. even, and when we get to the conversation on energy. This is my chance for Iran. and I want to share something that we found when Salim and Dave and I were at FI-9. So I'm on the board at FIII, and it's the Future Investment Initiative.
Starting point is 00:40:52 And one of the things they do every year is they do something called the Priority Global Survey. And this is a survey that they do in 32 countries. They have over 60,000 respondents. And it represents, you know, three quarters or two-thirds of the world's population. And, you know, we've had some criticism on this pod that we're, you know, not focused on the reality in different parts of the world. So I want to discuss the reality. What are people seeing and feeling outside of, you know, Silicon Valley outside of Boston? So here's some of the data, and I'd love to discuss it with you guys, because the data is important and it's concerning.
Starting point is 00:41:37 so they surveyed and asked the question about what are your top concerns and we see this across the global south and the global north so the number one concern globally is cost of living by far can we afford to live in this world and and tied very closely to that is unemployment will I get a job which if I get a job will it pay me enough to live and then the third concern is is poverty and social inequities. I mean, this is what two-thirds of the world is feeling right now. Here's the next chart, and we can look at it by region, Africa, Asia, Europe, Mena, North America,
Starting point is 00:42:21 Oceana, and South America. And here we see, you know, Africa is number one concern is unemployment. The rest of the world, it's the cost of living. It's just expensive to live. And we talk about a future abundance, talk about demonization, but this is the reality what people are feeling. What are your thoughts here? Selim going to jump in first? Yeah, I mean, look, this is an extrapolation of the basic
Starting point is 00:42:50 nature of human reality. We've been living in fear since the beginning of time, right? In the cave dwelling days, you're worried that a hyena will come and steal your baby at night. Now we're worried about jobs. The idea, I think we need to flip over completely to a UBI type structure to navigate the world going forward because the concept of a job is going away. I mean, think about the idea that all of our education systems are designed to take a young child, train them through the early 20s to be ready to be for a job market, that we have no idea what a job looks like in five years. So this may be the thing that breaks the educational model, and it breaks all the other models into a totally new reality where most of the mechanisms for subsistence, education, healthcare,
Starting point is 00:43:34 etc basically free take that 40x curve applied to some of these domains that's for me the incredible opportunity so i'd flip this into the massive opportunities there but people aren't seeing the what's going on therefore they get stuck in this i got it but you know it's it's not evenly distributed yet this is the reality that people are feeling right now they're feeling fear about can i get a job and can i afford to live it's a very real concern this is our job as leaders and podcasters and message conveyors to show that. Like, take Amjad, you know, little developer out of Jordan, boom, builds a multi-billion dollar company. Take Vitalik, take Elon, coming from nothing to building global paradigm changing things just from mindset. And therefore, now, the inner loop, I'm going to go back to that again,
Starting point is 00:44:21 is just literally mindset and entrepreneurship. And Peter, you talk about this all the time. I flip this around and see the opportunity in this. I'll give you a little snapshot on what a big part of the world looks like. One of our summer interns, she's from Iran, her, her parents are still in Iran. I grew up as a young child in Iran. And she said that her parents spend one third of their annual income on their iPhone and data plan. And like, whoa. But she's like, look, you can't live without information. And actually, the currency is no good. So everything's Bitcoin. So how are you going to manage your Bitcoin without an iPhone? So you got a third of your income going into your phone and your data plan. And all that money funnels out of the country and kind of lands in like
Starting point is 00:45:04 said, Silicon Valley and Boston. And so that wealth disparity, you know, just from the phone, you know, and then you add AI as a layer on top of that, and the gap is going to get really, really wide. So that's the reality of a huge fraction of the world's population, though. I hear you. I don't know if you want to add anything, Alex, on this, but this is one of my biggest concerns, right? You know, this data for me is worrisome. You know, I'm clear that we are going to get to an abundant future. Maybe it's a decade out. You know, we're going to have a continuous demonization and all kinds of, you know, uplifting of health care and education by AI. But I think in the next two to seven years, that's what really concerns me, right? Where if people are, if young men are not
Starting point is 00:45:52 getting jobs and people are losing their jobs as a result of this, before we sort of flip the economics into an abundance model, you know, the question is, how do we help people? people believe in a hopeful and compelling future. Because if they don't believe it's a hopeful and compelling future, they're going to believe what they see from Hollywood, which is dystopian AI's and killer robots. I think you've hit the crux of it, right? How do we get narratives out there that demonstrate that future and do it fast and overcome the fact that people are 10x more likely to listen to fear stories than stories of a positive
Starting point is 00:46:27 future? And that has to be overcome. Therefore, you need 10 times more stories on the positive side to overcome. And people worldwide are really worried. One of the positive, consistent piece of feedback I get about this podcast is the fact that we're relentlessly optimistic about the future. Why? Because technology is a major driver of progress in the world and maybe the only major driver of progress. And now that's moving exponentially.
Starting point is 00:46:52 Do you guys remember what we were talking about before we hit record on this pod? The idea that it would be amazing to bring together a community of builders and coders and entrepreneurs to work. on uplifting humanity in the near term. You know, I think we should, I think we should do that. I think we should pull together this moonshot community and, and see if they want to discuss how do we make the world a better place, right? How do you, you know, how do we take, you know, how do we build moonshots that really uplift mindset, but help address unemployment, cost of living in the near term?
Starting point is 00:47:33 I mean, Elon's built an incredible community towards going to Mars. Satoshi, you know, created an incredible community on Bitcoin and crypto. Yeah, I'm talking about, you know, do we organize a meetup of the Moonshot listeners? Do we pull folks together and talk about, you know, solving grand challenges together? Yeah, I spend a lot of time with college undergrads and seniors, and they would flock to that mission like you wouldn't believe. You'd get incredible talent coming to that. mission. You know, when they're in their early 20s, mid-20s before scar tissue of life has accumulated too much, they are all in on that. And so you'd get really, really smart people
Starting point is 00:48:14 working on it. I mean, bring together the builders, the visionaries, you know, the folks who want to really build. You know, I like to say that the world's biggest problem is the world's biggest business opportunities, want to be a billionaire, help a billion people. I mean, that's the conversation. You know, and I know none of us have extra time to actually pull an event together. But if folks listening to this, I think it's mandatory because the only way you're going to change the world is to have people shift their mindset, come to listen to stuff like this, and that actually activate it and go do something. So imagine we did an event, brought everybody together, talked about things, and then people actually activated online, formed teams
Starting point is 00:48:51 and went off and did stuff. And then we tracked that over time. That would be kind of pretty cool. All right. I mean, so here's, you know, Alex, are you in on that? I think it's a benchmark problem. I think it's less about events and less about teams and more about just rigorously defining benchmarks for all of these problems. How about a benchmark for cost of living that then the world and this 40x year-over-year hyperdeflation of intelligence can optimize towards. Same with crime and delinquency and health care, cost of health care. We're going to be drowning in humanoid robots that are generalist in terms of their capabilities in the next few years. We'll talk about benchmarks.
Starting point is 00:49:31 We'll talk about the benchmarks then at this. So, I mean, if you guys are in, and my feeling is, you know, none of us have time to put an event together. But if there's interest in the community, so to everyone listening, this is what we talked about earlier, if you have an interest in joining us at some kind of a moonshot gathering, a moonshot summit, whatever it is, if we can get enough of you, let's say a thousand who say, yes, we want to do. this and you want to spend time with the moonshot mates then we'll pull this together here's what i'm here's what i'm proposing uh we'll set up an email uh let's call it moonshots at dmandis.com if you're interested in this idea of a a moonshot summit to bring everybody together talk about these world's biggest problems talk about the benchmarks talk about the moonshots required send us an email and if we can get a thousand people who say they want to be in on this
Starting point is 00:50:29 then we'll pull it together. We'll bring together moonshot maids. We'll bring together sort of the most exciting CEOs and moonshot engineers and have an epic two-day event. I think two days is the right length for this. Can I riff on this for a second? Yeah. If you take Hans Rosling's work,
Starting point is 00:50:51 which showed that over the last hundred years, we dropped the cost of electricity, transportation, telecommunication by thousands of times each, right? And then you say, okay, we want benchmarks that in the next two, three years drop those by 1,000 times each. If you extrapolate the 40x, that then gives you the target to go after to the benchmark comment that Alex made. And then you basically bring in the sage engine to say, okay, what policy changes do you need to make? If technology can reach this, how do you get this implemented? You could bring that together and make that a showcase for the world in a very powerful way.
Starting point is 00:51:27 I think it would be incredible. I really would love to get everybody together and have that conversation and really ignite a passionate interest among entrepreneurs to focus on this because there's real challenges out there in the world. All right. So here's the deal. If a thousand of you who are listening, you know, want to join us. Let's say sometime next fall, then send us an email, Moonshots at DM. amandis.com. And if there's enough interest, we'll pull this together. All right. Let's get back. There's a lot, a lot more to cover. All right. Our next segment here is data centers, energy, and space. So multiple data centers are reaching one gigawatt in 2026. You know, we're tiling the world in data centers, Anthropic and Amazon, XAI, Microsoft, meta, and Open AI Stargate. Alex, what's the story here? Well, I think the trillion dollar question, Peter, is will we see a peak in the amount of coherent power needed for coherent training
Starting point is 00:52:37 runs of large frontier models? If we do, one could imagine as incredible as it may sound looking at this curve where everything's going up into the right in terms of total facility power, we might actually see a peak maybe in a few gigawatts sometime over the next few years and then decline, if there are algorithmic innovations that enable us to do distributed training runs rather than needing one large power-intensive, coherent supercluster to do it, tiling the earth could look like tiling the earth with relatively lower power-density compute. Totally imaginable. On the other hand, if this looks a little bit more, I spoke earlier, about AGI being essentially, as it turns out, compression of information, if it turns out that there
Starting point is 00:53:23 are further phase changes that we can achieve by compressing more and more and more with larger and larger facilities, then maybe eventually an extremist, we end up in sort of, as I've spoken about on the pod previously, maybe more of a black hole, desktop black hole computer regime where we're just building these incredibly power dense facilities to train more and more and more. Again, I could go either way on this, but I think that's like the trillion dollar question. Will this peak or not? Besides peak data centers, a question is we're going to peak energy? That's a question for you, Alex. So the U.S. government, Brookfield, and Camico have launched an $80 billion partnership to build nuclear reactors. You know, as I
Starting point is 00:54:07 researched this, what I found frustrating is the time frame for building out these nuclear reactors is still, you know, an order of five years to 10 years. Alex, what are you seeing here? Yeah, I've gotten probably quite a bit, surprising amount of feedback from the community and the audience reminding me that I shouldn't ignore existing Generation 3 plus nuclear reactors in favor of SMRs and fusion reactors. So I want to make sure I just nail this point. There are right now at least six AP 1000s. These are made by Westinghouse, which ironically went bankrupt in 2017, building a couple of these. In Georgia and South Carolina, now it's hot again because superintelligence is hungry for power. And now it's incredibly valuable.
Starting point is 00:55:00 They cost maybe about $7 billion to build. So $80 billion partnership to build these, maybe build 10 reactors all across the U.S. This is going to be a very big deal. And critically, unlike SMRs where there are maybe only two or three and they're relatively emerging technologies, this is by comparison, it's a relatively mature format for nuclear power. And I think when we talk about the bridge to power for superintelligence from nat gas to nuclear vision to nuclear fusion with solar sprinkled and solar plus batteries sprinkled throughout, I think Generation 3 plus reactors like the AP 1000 have a very important
Starting point is 00:55:40 role to play. So I said to appease the audience that I'm not ignoring Generation 3 plus. No, but I and I think that's really important. These AP 1,000 are 1.1 gigawatt power plants. You know, when Eric Schmidt was testifying in front of Congress, he said we need 92 gigawatts by 2030. So this particular deal might put 10 of these 1.1 gigawatt data centers on the map. But they're not going to be coming online until into the early mid-2030s. So the question is, how do we build out an additional 90 gigawatts in the next four years? Where is that going to come from? Yeah, I think the deal structure behind this is worth understanding, too, because it generalizes to solar and to, you know, to fusion and to everything else.
Starting point is 00:56:28 So what's happening here is a company Westinghouse that got bought by Toshiba. So this is part of America deindustrializing very stupidly for decades. Toshiba buys Westinghouse. Westinghouse tries to build nuclear facilities. The government is so bureaucratic and so onerous that it goes bankrupt. So then in 2017, the private equity guys come in. Brookfield led and say, okay, we have very smart business school majors here. We'll try and revive this thing. And the timing is 2017, right when the transformer comes out. So the timing
Starting point is 00:57:01 turns out to be perfect. So now what's happening is the private equity firms and the, you know, the econ majors from all these schools are going to the government and saying, give us 10, 20, 30 billion dollars in loans, guaranteed loans, and we'll use that to build these facilities. And then if they're successful, we make a huge profit. And if they fail, you know, we're right off the loan. So there's not a lot of downside. Many, many, many econ majors and business people should be shifting into this area because the government is open for business now. But that's the structure. You go to the government. You get the loan. You build the next big thing. The next big thing, if it succeeds, you get the profit. You get the margin. And the government subsidizes it. So it's a golden era.
Starting point is 00:57:45 because a lot of people, you know, when I'm lecturing on campus, all the AI people, all the computer science people, they know exactly what they want to do. But then all the econ majors and business majors, like, how do I get in this? How do I get in this? This is how you get in this. The flow of capital is in the, it's going to be $1.2 trillion a year by 2030, coming just into data center construction and power for it. There's nothing even close in the history of the world to that scale of money movement. So just inject yourself right into it. Building out the railroads, right building out the telecom networks those were those were significant just not these dollar figures because i have a clarifying question i have a clarifying question to ask here in our last
Starting point is 00:58:22 part we talked about the fact the u.s is building 5 000 data centers 10x more than anybody else does that is that inconsistent with the amount of energy available so we have 5 000 no it's today it's the data centers we have today of all types not just a i data centers or 5 000 more you know cumulably more than the rest of the world combined To me, this is one of the few things that's easy to predict. You know, everything is changing so quickly. But the chip fabs are exactly what they are. We're building them at a certain rate.
Starting point is 00:58:53 Every chip is going to get used. The chips have a certain power consumption. That's very calculable. You can assume that they're all going to get flat out sold out, but we can't make any more of them than 20 million GPUs this year, and then it'll expand at some rate. So working back from that, you can exactly predict the flow of capital required to build out this entire infrastructure.
Starting point is 00:59:13 And it's usually under capital. The age of category three nuclear reactors, 20 years. Three plus. Three plus. Yeah. Three plus. Fair enough. And for what it's worth, for those who are looking at this on YouTube, the visual format,
Starting point is 00:59:29 the form factor of these plants actually looks like some sort of hybrid between what you're seeing here with the conventional older generation cooling towers and the newer SMRs. You could be forgiven for mistaking it for a number. normal building. One of the issues here is the U.S. public with the old, the original Gen 1 and Gen 2, Three Mile Island, Fukushima being worried about these plants, not in the backyard, but the three plus are fail-safe nuclear systems that, again, I'm happy to have in my backyard. We've got to change the narrative and we've got to accelerate this. Even companies that are
Starting point is 01:00:08 are bringing back online previous nuclear power plants, it's taking five years plus to get them online. The timelines are just too long. And Dave, the point you're making is even with this, we're like one-tenth of the rate that we're really needing, therefore this was a guaranteed boom. This episode is brought to you by Blitzy, autonomous software development with infinite code context.
Starting point is 01:00:33 Blitzy uses thousands of specialized AI agents that think for hours to underline, understand enterprise scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5X engineering velocity increase when incorporating Blitsey as their pre-IDE development tool,
Starting point is 01:01:14 pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5X your engineering velocity, visit blitzie.com to schedule a demo and start building with Blitzy today. All right, let's jump into a conversation on robotics, and in particular, robot drone. So China just broke the world record on the number of flying drones. 16,000 AI-powered drones flying together controlled by a smart AI system. Let's take a look at the video. So if you were watching, you saw a beautiful drone. drone show in the sky. If you were listening, you heard some music. But here's one interesting
Starting point is 01:02:10 concepts. Imagine being able to have 16,000 drones up in the sky. You can actually create a giant TV screen and watch a television program or a movie across an entire city. What's the significance of this for you, Dave? Oh, big time. I think that people visualize robots in human-eyed form, you know, just because that's what's in the movies, constructing building. and cleaning your yard and whatever. But I think the swarm version of it is actually just as big a deal, if not bigger. And it's been very hard for the Hollywood studios
Starting point is 01:02:45 and cleaning your yard and whatever. But I think the swarm version of it is actually just as big a deal, if not bigger. And it's been very hard for the Hollywood studios to create swarm visual effects, so they don't use them much in sci-fi, and therefore people don't really think about them. And then when you look at a flock of birds or a bunch of bees, they're actually not coordinated. They're a little coordinated, but they're not really coordinated. The AI version of it, as you saw in that video, is perfectly coordinated,
Starting point is 01:03:05 down to the millimeter. And it's a very effective way to do things like construction, yard work, cleaning your gutters, whatever, because you can put 50 drones in if you need to pick up something heavy. You can put two in if it's light. So it's very, very flexible. So I think it's going to be a big, big part of taking AI and making it affect the physical world much more than people are currently predicting. Because if you look at our podcast, we must have had 50 videos on different humanoids dancing and fighting and whatever. But I really feel like the drone part of it, or the interactive thousands-of-drones part of it, is way underappreciated as a real-world thing to do right now.
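For listeners curious what "a little coordinated" birds versus AI-coordinated drones means in practice, the classic starting point is Reynolds-style flocking, where each agent follows three purely local rules: cohesion, alignment, and separation. The sketch below is a minimal illustration of that idea, not how commercial drone shows are actually flown (those typically follow pre-choreographed GPS waypoints); every function name and parameter here is invented for the example.

```python
import math
import random

def flock_step(boids, dt=0.1, cohesion=0.5, alignment=0.3,
               separation=1.0, sep_radius=0.5, drag=0.9):
    """One Boids-style update. Each boid is a dict with a 2D position
    'p' and velocity 'v'. Returns the updated flock."""
    n = len(boids)
    cx = sum(b["p"][0] for b in boids) / n          # flock centroid
    cy = sum(b["p"][1] for b in boids) / n
    avx = sum(b["v"][0] for b in boids) / n         # average velocity
    avy = sum(b["v"][1] for b in boids) / n
    out = []
    for b in boids:
        px, py = b["p"]
        vx, vy = b["v"]
        ax = cohesion * (cx - px)                   # steer toward the group
        ay = cohesion * (cy - py)
        ax += alignment * (avx - vx)                # match the group's heading
        ay += alignment * (avy - vy)
        for o in boids:                             # push away from close neighbors
            if o is b:
                continue
            dx, dy = px - o["p"][0], py - o["p"][1]
            d = math.hypot(dx, dy)
            if 0.0 < d < sep_radius:
                ax += separation * dx / d ** 2
                ay += separation * dy / d ** 2
        vx = (vx + ax * dt) * drag                  # drag keeps the motion stable
        vy = (vy + ay * dt) * drag
        out.append({"p": (px + vx * dt, py + vy * dt), "v": (vx, vy)})
    return out
```

Run a few dozen steps and a randomly scattered flock pulls itself into a tight, evenly spaced cluster with no central choreographer; a real show layers per-drone target waypoints on top of exactly this kind of local rule.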
Starting point is 01:03:45 I still loved the episode last week about a mosquito-killing drone, just flying around and zapping mosquitoes in your backyard. You know, drones are becoming the front line of the warfare in Ukraine. I mean, they already are. That war is being prosecuted by half a million drones on either side. It's not a manned thing. It's just drones killing. And those drones are, to a large degree, being manufactured in Ukraine.
Starting point is 01:04:13 And one of the things that's interesting is, when the war is finally over, you know, God and the leadership of countries willing, Ukraine as a nation is going to be sort of Europe's drone manufacturing capital. That will be a dividend out of it. They're already the world's best army. Yeah. Because they've had to be. And just as a coda, bringing it back to the U.S.: there are 10,000 drones a month crossing the Mexico-U.S. border. And I keep telling government people this: drone technology beats wall technology.
Starting point is 01:04:45 What the hell are we doing? Oh, my God. There's probably a good slogan there someplace. Okay, here's a fun article. Elon Musk, Tesla might unveil a flying car. So we've been talking about this. This is the next-generation Roadster. God knows what they're going to charge for it,
Starting point is 01:05:05 and this is not, you know, a Model Y mass-production car. I remember being backstage with Elon when he had funded the Global Learning XPRIZE, and we're backstage talking about Tesla before going out. And talk about the guy's level of intensity, right? He was worried about what he was launching. It was a Falcon 9 launch that night with one of the largest payloads, and he was concerned about center-of-mass issues and vibration issues. And then he was concerned about whether Tesla could, in fact, survive the next quarter.
Starting point is 01:05:43 This is like 2017, 2018. And I said, well, are you going to put out the next Roadster? He goes, man, oh man, nothing matters other than the Model Y and the Model 3. He said, that's our mass-production car. And so he was always looking at that. But here we are. And I think the interesting conversation here is, will this Roadster include, you know, jet or rocket propulsion from SpaceX that will give it the ability
Starting point is 01:06:12 to hover or hop, right? That's what people believe will be materializing. Any thoughts here? I'd like to point out a historical irony, if I may. There was a lot of hand-wringing about eight years ago, circa 2017, about how we were promised flying cars and instead only got 140 characters. But if you play the tape forward, as it were, the 140 characters was Twitter. That became X. That became xAI. And that's now the integrated technical and capital structure that's poised to give us a flying car. So a bit of historic irony that maybe, in some sense, the 140 characters actually gave us a flying car after all.
Starting point is 01:06:51 That's such a great connect. You know, Elon's gone on record saying by the end of the year, there will be an unforgettable demo. So excited to see what that looks like. I mean, the current gossip is that SpaceX is going to provide cold gas thrusters, right? Propulsion systems that might allow the car to sort of hop or hover. Are you serious? Yes. Yeah, seriously. So imagine you have, you know, 30 seconds of hovering time, and then you're out until you recharge them. So, I mean, interestingly, you could be recharging these thrusters by just compressing air as you're driving along. Who knows? Well, I'll tell you, you know, the highways were originally designed to go 100, 120 miles an hour,
Starting point is 01:07:42 but then turned out to be way too dangerous. But if you added that cold thruster to the car and it has accident avoidance built into it, its ability to jump and hover over an accident scene is incredibly valuable. Well, I'm curious how high it can hover. I'm guessing we're talking about like a foot. I think once you get out of ground effect, you're not going to have much hovering capability. You guys have heard me talk about this before, but I just spent the last four days commuting from cities into JFK or Guarulhos in São Paulo. This cannot arrive fast enough for my taste. And if Tesla is able to unlock this world, the way it unlocked electric cars, massive. Okay, okay, let's point out right now, these are not eVTOLs. These are not,
Starting point is 01:08:28 quote-unquote, flying cars. These are, at best, you know, short-hopping, hovering cars, right? So we still have Joby and Archer Aviation, EHang, and all kinds of other companies out of China that will be multi-copter electric transport vehicles, right? So Joby, I'm sorry, Archer has the contract here in L.A. for the 2028 Olympics. So that will be fun.
Starting point is 01:08:53 But I think this is more of a fun thing, you know, the kid with all the toys wants a flying, hovering car. Well, part of the brilliance of Elon, too, is that car companies will spend something like 7% of revenue on marketing. And he spends zero, but he takes that money and does cool, cool things that are far more valuable than marketing. And so it actually is net profitable for the underlying companies to pursue these crazy
Starting point is 01:09:18 interesting science projects, some of which turn into real products. It keeps the market cap high, but it also replaces the marketing budget, brilliantly. And a lot of people should be copying it. Like, what can I do to be inspirational and cool and use that to drive people to my product, to my company, to my team, to my mission? Yeah, it also motivates the team, right? People want to come work at the coolest places. Helps recruiting. You get the best people. It all works. Elon has invented the formula and perfected it. I think Steve Jobs kind of invented it, but Elon has taken it to the next level, and everyone should just be studying it. Whether you like Elon or not, study it; it clearly works.
Starting point is 01:09:56 It's the right plan. I love this; I guess it was a tweet. So Elon on how to prevent global warming, quote, a large solar-powered AI satellite constellation that would be able to prevent global warming by making tiny adjustments in how much solar energy reaches the Earth. So, Salim, we've talked about this at the XPRIZE for ages. I call this a solar sunshade: being able to have something out maybe at the Lagrange point that's able to reflect a quarter of 1% of the sunlight impinging on
Starting point is 01:10:47 the Earth, and basically use it as a thermostat to titrate solar flux on the planet. The challenge with this is that it's a tragedy of the commons. There are going to be some countries, like Russia, that want global warming because it opens up the waterways north of their country, and others where it's decimating their agriculture, like parts of Africa and Europe. And no one can take action. Salim, what are your thoughts here? Yeah, there's an old story called the Pinatubo effect. When Mount Pinatubo in the Philippines erupted in the early 90s, the ash covered the whole atmosphere for a while and it dropped
Starting point is 01:11:09 global temperature by about half a degree. And one of the thoughts I've been thinking over the years is there are about seven majorly threatened areas with sea levels rising: Washington, D.C., the big river deltas, Bangladesh, the Low Countries, Florida, etc. And I thought
Starting point is 01:11:25 they would actually just start launching up rockets, without telling anyone, to do something like this, because they have an existential threat. They're going to do it. And the cost is not that heavy. But this gives you a computable capability, and you can calibrate it much more effectively. That this is reversible is the most important thing, right?
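The "quarter of 1%" figure is easy to sanity-check with back-of-envelope radiative-forcing arithmetic. The constants below are standard textbook values, not numbers from any specific mission proposal, and the calculation is a sketch that ignores climate feedbacks:

```python
# Back-of-envelope check on the "quarter of 1%" sunshade figure.
# All constants are standard textbook values (assumptions, not mission numbers).
S = 1361.0           # solar constant at Earth's distance, W/m^2
ALBEDO = 0.29        # Earth's planetary albedo (~30% is reflected anyway)
CO2_DOUBLING = 3.7   # canonical radiative forcing of doubled CO2, W/m^2

def sunshade_forcing(blocked_fraction: float) -> float:
    """Reduction in absorbed solar flux, averaged over the whole sphere.

    The factor of 4 spreads the intercepted disk (pi * r^2) over the
    sphere's surface area (4 * pi * r^2); (1 - albedo) counts only the
    sunlight Earth would actually have absorbed.
    """
    return blocked_fraction * (S / 4.0) * (1.0 - ALBEDO)

offset = sunshade_forcing(0.0025)  # block a quarter of one percent
print(f"{offset:.2f} W/m^2 offset, ~{offset / CO2_DOUBLING:.0%} of a CO2 doubling")
```

That works out to roughly 0.6 W/m^2, on the order of a sixth of a CO2 doubling; commonly discussed sunshade concepts target closer to 1-2% for that reason. But the broader point in the conversation stands: the lever is finely titratable, and it is reversible.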
Starting point is 01:11:41 That's right. That's really huge. It's geoscale engineering. Alex, you've probably thought about this. And by the way, just to point that out, people complain, oh my God, you can't go around geoengineering. And the response is: we have been geoengineering by default, by throwing up all this carbon.
Starting point is 01:11:56 We have to figure out technological ways of doing it. COP23 and all that stuff, nation states will not solve this. Good point. The idea of a global weather grid is one of the biggest ideas I'd like to push forward, and I haven't seen either the political will or the technical will to push it forward. It doesn't even need to be a sunshade per se. It could be as simple as satellites, via microwave heating or some other
Starting point is 01:12:24 mechanism, increasing the local cloud cover in some areas and reducing it in others. That could be enough to solve a broader problem than just global warming, which is weather control. Wouldn't it be wonderful if we could steer hurricanes in one direction versus another, or mitigate storms? From a technical perspective, doubly so in the era of AI, when we can have planetary-scale weather models, including more recent strong ones out of DeepMind, I think we can solve this problem. It's more, I think, a political problem of just deciding that we, as a planet, want to do it. Well, it's an insurance problem too. Oops, we steered the hurricane in the wrong direction, right? It'll be a social
Starting point is 01:13:04 problem more than a technical problem. Yes, it'll be a heyday for conspiracy theorists, but it's inevitable. I mean, a topic like that, you have to have global consensus, and that's always been impossible. But it's inevitable that we need to get over that hurdle in the next few years, not 20 years, because there are many of these topics coming up concurrently. This is a really good one to force the agenda. But these things are all global. And so if there's no mechanism for global consensus, we're screwed as a world.
Starting point is 01:13:35 So we have to get over that hurdle. Yeah, well, we don't get to Kardashev Type 1 civilization status without a global weather control grid. It's as simple as that. All right. Here's the next article. This is a fun one. This is Blue Origin lands its New Glenn rocket booster for the first time. We see a video here of what we're used to seeing SpaceX do with Falcon 9,
Starting point is 01:13:58 but this is a Blue Origin vehicle. So Blue Origin launched a mission to Mars called ESCAPADE. Congratulations to Jeff Bezos. And the booster touched down on the recovery ship called Jacklyn, which Jeff named after his mom. How about that? Hey, Mom, I got a gift for you. I'm naming the recovery ship after you.
Starting point is 01:14:20 So this is a big deal for me. This is doubling our chance of getting humanity out into space, not being overly dependent on SpaceX, which is still, by the way, launching over 90 percent of America's spacecraft and something like probably 70 percent of the world's launches right now. Any thoughts on this one, guys? I just think it's great that we have a second capability aside from SpaceX. I think it's good for the world. Jeff's been spending about a billion dollars of his Amazon stock per year to fund this. It moved a lot slower. I used to bug him about why isn't he going faster.
Starting point is 01:15:01 But, hey, he's here now, which is great. And, of course, Blue Origin's going to be using their own booster now to launch their competition to Starlink, which is already being deployed. Alex, anything you want to add here? Yeah, I think having multiple reusable railroads, if you will, to orbit is exactly the sort of space race we want to find ourselves in. And I think if we're going to colonize and develop the solar system, we're going to need multiple routes to orbit. Yeah. It was nice to see Elon congratulate Jeff on this. Of course, Starship puts all of these other launch vehicles to shame. You know, Elon very famously said once Starship is up and operating, he'll shut down the Falcon 9 line.
Starting point is 01:15:49 And it will outcompete Blue Origin and Rocket Lab and everything else. It'll be the big sucking sound. Can I double down on that just for a second? Sure. I think it's so awesome that he tweeted the congrats, because it just shows that they're all focused on the bigger picture. This is not about competition. This is about solving the problem. I think that's just fantastic.
Starting point is 01:16:09 Yeah, agreed. All right, talking about the opposite end of the spectrum. This just made me mad. But, you know, it's the conversation we had earlier coming out of FII. People are concerned. So labor unions in Boston are fighting Waymo. The Boston unions formed a labor coalition united against Waymo. And the approach here is they're going to force Waymo to put a human safety driver in the car, in the right seat or the left seat, who knows.
Starting point is 01:16:42 We've seen this before, right? When France made Uber illegal, lots of places were fighting to keep, you know, these unions in place. Dave and Alex, you live in Boston. How do you feel about this? Well, it's a little disconcerting that our tech hubs, you know, the best and biggest tech hubs in the country, are also the most dysfunctional governmentally in terms of things like this. This is utterly insane, right? And it's obvious to anyone involved in it, but the populist uprisings are going to be all over the place on all kinds of topics. We've seen the picketers outside the front of OpenAI, you know. And so this is going to happen all over the place. But if the governments of those regions don't get on top of it and put some kind of rational system together, then people are just going to leave. You know, like Waymo will go elsewhere, and it is already going elsewhere.
Starting point is 01:17:34 And that's just going to be really bad for Silicon Valley and for Boston and for New York. They've just got to figure it out. One of the things that keeps me up at night, as it were, is this sort of regressionist approach where people, unions, organizations that are worried about employment fight the advance of technology that will save lives, increase economic wealth, and just make quality of life radically better. And so I think this is almost a meta-technology that we need to develop: a way to maintain social cohesion while at the same time radically accelerating technology. We haven't cracked that yet. Maybe social cohesion tech needs its own benchmark, and we solve that.
Starting point is 01:18:21 There's almost an optimal trajectory where we get our acceleration and our social cohesion at the same time, but we haven't cracked the social cohesion part of that, and I'd love to solve it. Sure, I mean, we can talk about that at the summit if it comes together. But here's the deal:
Starting point is 01:18:45 I mean, people are worried for their jobs. Number one, that's it. It's survival. I need to feed my kids. I need to be able to afford my home. And this is going to take it away from me. How can you possibly do that? And until we up-level our capability to provide people that safety net, whether it's universal basic income, though I'm much more interested in universal basic services, right? Anyway, Salim, you were going to say? Yeah, I want to grandstand just for
Starting point is 01:19:17 a little bit. One of the things we noticed after the EXO book came out was that you're going to see this massive Luddite-type pushback against new technology, right? Because people would much rather be comfortable than happy. And we actually focused on this. So we solved this, what I call the immune system problem. We created a 10-week engagement that cracks this in big companies. We've done it 100 times. We even have a nonprofit that does this in the public sector, where you need to change, where regulatory and this type of construct is the immune system. It takes 16 weeks, but it works. We've done it a bunch of times. Anybody facing this, just give us a call. We'll show you how to do it. We found a way of hacking culture. Where do they reach you? Just ping me at salim@openexo.com and we'll show you how to do it. We've open-sourced the methodology because, you know, a few years ago when the book came out,
Starting point is 01:19:56 with all of this technology, if we don't solve the cultural resistance to this, it doesn't matter what the breakthroughs are. We're going to be fighting this political problem. And the next level we were going after is how you solve the immune system problem in an institution like health care, journalism, or education. They each have their unique immune systems. I mean, we're going to see this across every industry as white-collar AI, you know, superintelligence comes in, humanoid robots come in. This is just a small peek at what's going to be coming. We've got to solve it now. All right, let's go into our final segment here, which for me is one of the more important and exciting ones, which is what's going on in the world of science. And I'm going to
Starting point is 01:20:38 start this conversation with a video clip from Sam Altman on his thoughts about GPT6 and the science leap that's coming. If GPT3 was like the first moment where you saw a glimmer of something that felt like the spiritual Turing test getting passed, GPT5 is the first moment where you see a glimmer of AI doing new science. It's like very tiny things, but here and there, someone's posting like, oh, it figured this thing out, or, oh, it came up with this new idea. Oh, it was like a useful collaborator on this paper. And there is a chance that GPT6 will be a GPT3-to-4-like leap, like the one that happened for kind
Starting point is 01:21:17 of Turing test like stuff for science, where five has these tiny glimmers and six can really do it. All right. Alex, let's open up with you. Yeah, I've gone on. record is saying I think we're going to see many, if not most, grand challenges in math, science, engineering, and medicine start to fall to AI over the next three maximum years. So I think this is very much on my anticipated trajectory.
Starting point is 01:21:43 record as saying I think we're going to see many, if not most, grand challenges in math, science, engineering, and medicine start to fall to AI over the next three years, maximum. So I think this is very much on my anticipated trajectory. Science is going to get solved, all of its disciplines are going to get solved, and AI is going to do it. And I for one am super excited about finding myself in a near-term Star Trek type future where it turns out that centuries of human capital, or the equivalent of centuries of human capital, just get solved overnight, at bulk, at scale, by AI. Yeah. Love it. I think the number of patents being filed and the amount of Nobel Prize-winning science being done is going to skyrocket. You can actually see it; there's an interesting chart I saw where if you look at patent filings post-ChatGPT,
Starting point is 01:22:26 there's just exponential growth immediately thereafter, where it's an aid to humans, but all of a sudden, if it's autonomously doing the science in sort of closed-loop cycles, it's amazing. Dave? Yeah, those are two things really worth tracking. The AI-generated patents and also the agent-to-agent transactions,
Starting point is 01:22:45 part of which are licensing the patents. But that whole agent-to-agent intellectual exchange world is starting to really take off, and you can just track it by transaction count and see the shape of the exponent. That'll be something we'll track really closely. Salim? This justifies why I didn't pay attention during my physics degree. Oh my god. Well, look, if history is consistent, GPT6 and Gemini 3 will be about the same, you know; they're just leapfrogging each other. And Gemini 3 from Google is within a week, we think. Yeah. So we have to carve out probably, you know, a big chunk, maybe a full day, just
Starting point is 01:23:25 studying its capabilities. And we will. When Gemini 3 comes out, expect us to go live with an analysis of it as soon thereafter as possible. We're seeing a lot, right? We just saw OpenAI's GPT 5.1 come out. Mira Murati's company has just gone from like a nine or 10 billion valuation to a 50 billion valuation. There's a lot of froth right now. All right, let's move on to the next one. Zuckerberg and Chan bet AI can cure all diseases. Zuckerberg believes AI could make cures come much sooner while empowering scientists to take risks. So the Chan Zuckerberg Initiative is to boost compute 10x by 2028, shifting all science work under their Biohub brand. So this is great. I love that. I mean, they've had an interest in
Starting point is 01:24:22 medicine and biology for some time, but now they're doubling down and focusing. Alex, let's go to you first. Yeah, so you'll remember when CZI launched in 2016, the goal was to cure all disease or most disease by the end of the 21st century. And now the messaging has radically changed. Now the messaging is we're going to have generative AI-based virtual cells and presumably virtual organs and virtual organisms built on top of those, enabling AIs to search intervention space for cures to all disease. So now I think the subtext is you don't have to wait until necessarily the end of the 21st century to cure all disease.
Starting point is 01:25:01 This could happen in the next five years, call it 2030. And I think all of the timelines, not just CZI, but other nonprofits that are working on AI for broad-spectrum, sort of generalist cure-all-diseases work, have similar timelines. You see similar messaging out of Anthropic as well: 2030, cure all disease with AI. Yeah, we saw that from Demis Hassabis, within a decade cure all disease, right? So we're seeing a huge amount of talent, compute, and capital going towards that goal, which is good news for everybody.
Starting point is 01:25:34 What I loved about this, to Alex's inner-loop point, is that instead of working on specific cures, they're just focused on generating more compute and making it available to everybody. I think it's great. Yeah, in my experience, sometimes it's easier to solve the more general problem than the more specific problem. It may perversely end up being the case that it's easier to just cure all diseases with AI than to cure diseases artisanally, one by one.
Starting point is 01:26:00 Yeah. I mean, that's the concept around age reversal. If you didn't have the disease when you were in your 20s and 30s, but it develops in your 40s or 50s, how do you turn back your epigenetic clock so that your cells are younger and thereby not expressing the disease, since you didn't express it in an earlier state of your biology? All right. Next one in this area, and this is a conversation about one of the first real, what people are calling, longevity therapies. So the U.S. government has slashed the price on GLP-1 drugs, and we're finding that GLP-1 drugs are lowering the risk of repeat strokes.
Starting point is 01:26:42 So, you know, one of the challenges has been that GLP-1 drugs have been expensive, and they are sort of a go-to for most physicians if someone has, especially, obesity-related or, you know, diet-related issues. Here we see TrumpRx.gov looking at bringing this down to $149 per month, which would be pretty amazing. We also find that the GLP-1 drugs in particular are able to cut the incidence of repeat strokes by as much as half in a three-month period of time. Who wants to jump in? I'll comment on this one. I mean, it's so exciting. I guess the elephant in the room is the outstanding question in biology: why are GLP-1 class drugs so seemingly miraculous? Why are they able to treat so many different forms of biological dysfunction, not just the metabolic issues
Starting point is 01:27:45 that they were originally intended for? Putting the question of biology and mechanism aside, I think when we talk about universal basic services and abundance of healthcare, this is the beginnings of that. I think offering GLP-1 class drugs for 150-odd dollars per month to U.S. persons who need them starts to look like universally basic, abundant healthspan drugs. And I think it's a major step in the right direction. I want to put out the warning again, just because I'm in this world, right? GLP-1 drugs are not a panacea. If you are obese and using these drugs, it's important to use them as a means to change the way you eat, to change your diet, change your habits. Because during this period of time, as you're losing weight,
Starting point is 01:28:39 you're also losing muscle. And you need to be exercising throughout this process. And if you stop taking the GLP-1, what happens is you gain the fat back, but you don't gain the muscle back. And that's a problem. Sarcopenia is a true issue as we're getting older. Your muscle is your longevity organ; super important to have that realization. Make sure you keep exercising, you know, muscle building, while you're using a GLP-1. I just love, though, all the side-effect benefits we're seeing without even realizing it. I think that's so great. Edison launches Kosmos, the AI scientist. So this seemed like a really big deal to me. Alex, do you want to walk us through it?
Starting point is 01:29:26 Sure. So this is another scaffolding-based approach to agentic science. This came out of Edison, as mentioned. And I think this is almost a preview of the age we're about to find ourselves in. Maybe we're a few months in at this point of bulk discovery. It'll look a little bit like, if folks remember, AlphaFold 3, where essentially almost overnight, a large chunk of structural biology was more or less solved. It's going to look a little bit like that, except much, much broader, where with this particular
Starting point is 01:29:58 agentic AI researcher, there were discoveries across a number of different subfields of biology, not just structural biology. So we'll see, as was published in this paper, discoveries potentially helpful for Alzheimer's, among other things. But the core technical advance claimed here is an effective increase in context length. That's the key. So the frontier models right now have context lengths in the millions, usually, of tokens. But if you wanted to develop the world's strongest AI scientist, ideally, naively, you'd want a model that
Starting point is 01:30:37 has a context length in maybe the trillions of tokens so that you could, in principle, feed it the entire internet and every paper ever published, and then just ask it, what's the solution? What's the solution to Alzheimer's? So the approach that Edison adopted here was a little bit more practical than some sort of algorithmic advance that pushed the context window from millions to trillions of tokens; it focused more on knowledge graphs and other scaffolding techniques to achieve effective context lengths that are much larger. But the end result is still essentially the same. You put as much information, as much scientific literature, into the context window as you practically can. And then you crank it and you ask for discoveries, and discoveries and innovations pop out. And I think one can imagine a near-term future where we can just scale our way, scaling-law style, to major discoveries across all of the important biological subfields.
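The "effective context length" idea is easy to see in miniature. The sketch below is a toy retrieval scaffold, not Edison's actual system: it ranks a corpus by overlap with a query and packs only the most relevant papers into a fixed token budget, which is roughly how scaffolds make a million-token window behave like a much larger one. The function names and the crude overlap scoring are invented for illustration; a real system would use embeddings, knowledge graphs, or both.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; a stand-in for a real tokenizer."""
    return re.findall(r"[a-z]+", text.lower())

def overlap_score(query_tokens, doc_tokens):
    """Count tokens shared between query and document (multiset overlap)."""
    q, d = Counter(query_tokens), Counter(doc_tokens)
    return sum(min(q[t], d[t]) for t in q)

def build_context(query, papers, budget_tokens=50):
    """Greedy retrieval scaffold: rank the corpus by relevance to the
    query, then pack the best papers into a fixed-size context window.
    The model only ever sees `budget_tokens` worth of text, but the
    scaffold has effectively 'read' the whole corpus."""
    qt = tokenize(query)
    ranked = sorted(papers,
                    key=lambda p: overlap_score(qt, tokenize(p)),
                    reverse=True)
    context, used = [], 0
    for p in ranked:
        n = len(tokenize(p))
        if used + n > budget_tokens:
            continue  # skip papers that would blow the window
        context.append(p)
        used += n
    return context
```

Swap the toy corpus for millions of papers and the fixed budget for a frontier model's window, and the shape of the trick is the same: relevance-ranked packing gives an effective context far larger than the literal one.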
Starting point is 01:31:34 Here's one of the metrics they threw out. It completes four to six months of expert human research in 12 hours, and can read 1,500 papers and run 42,000 lines of code per experiment. Yeah, it's almost brute-force research, wouldn't you say? Of course, but look at what happens when you take all the millions of research papers that have been written in the past, where people missed findings, and run through them: we find incredible things. Yeah, not just the papers, but the raw test results that are in digital form. Yeah. Just the incredible amount of information this can assimilate. Because when I look at my biology friends, talking to them last night actually, they're all like, well, you know, these things
Starting point is 01:32:20 always take a lot longer than you think. Like, how do you get so cynical at a young age? This is a completely new approach. It's brand-new greenfield territory. And if I look at what they actually have been doing for the last three years, they try and tease apart a single chemical reaction or a single test, and then they run it through MATLAB or Mathematica to try and tease it apart. And then they draw these plots that say, well, you know, we have statistical p-tests here that have significance, just barely. And it's like, what a waste of time, man. All these are interacting. And if you take the neural net approach and you just bombard it with raw information, it's really good at these multiple things going on concurrently. Try to find the conclusion without having to tease apart every single element. It's a brand-new way to do things,
Starting point is 01:33:03 and it could do anything. It could be mind-blowingly capable and quick. You don't know, because it's a new thing in the world. So put all that cynicism behind you, think about the rate of improvement that might be possible, and just embrace it. Yeah. All right. Our final article here, one for a fun conversation. Genetically engineered babies are banned, but tech titans are making one anyway.
Starting point is 01:33:28 All right. So, you know, this is worth the conversation. So there are a few companies now being funded that are building CRISPR capability for embryo editing. The Wall Street Journal reported on one out of San Francisco called Preventive. That's backed by Sam. I mean, Sam is backing an incredible number of companies, right? A CRISPR company, a brain-computer interface company, amongst probably dozens of others.
Starting point is 01:33:59 There's another company called Manhattan Genomics that was just covered in Wired. And so this, in my mind, is a regime change, right? We're going from selection to alteration. IVF clinics already allow you to screen your embryo, right? You can fertilize a number of embryos and then do single-cell sequencing and find out which of those embryos are safest to implant. But you can't edit your embryo. That is verboten under FDA rules. The FDA is blocking any of that.
Starting point is 01:34:35 They won't even support any research in that area, let alone allow it to be done commercially. So these companies are beginning to look outside the U.S.: where can they go and do it? And there's some conversation that this is happening, or will be happening, in the UAE. What do you guys think? I remember Raymond McCauley getting up and saying, look, the human genome is essentially software, and we have, you know, 50 trillion cells in the human body. Essentially, the human being is now a software engineering problem. And when you can edit the embryo, you're basically starting it from scratch.
Starting point is 01:35:09 It seems inevitable to me. And again, the big question is: what do you want to design for? Yeah. I mean, we give our babies the best we can, right? It's like you start genetic engineering when you pick your spouse, right? Are they successful, are they intelligent, do they look good? And that's the first step that you take.
Starting point is 01:35:31 This is, can I just go back? This is a really important point. We used to talk about the shift from film photography to digital photography and all the implications of that. Essentially, we've gone from breeding and genetic evolution to a digital model, which just accelerates the whole thing. And then when the baby's born, just one other thing: you give it the best health care you can, the best education you can, the best clothing you can. You're giving your child the best you possibly can. So the question is, why not start with the best genetic stack?
Starting point is 01:36:04 And, you know, the fear is the whole eugenics conversation. Alex, over to you. A couple of comments. The first: one of the elephants in the room is the movie Gattaca, arguably one of the best cinematic depictions of germline editing of human babies, or at least germline selection, I should say. I have to watch that one again. Yeah, it's an amazing movie. Many view it as a dystopian future. I think if you look at it the right way, it's arguably a more utopian future in the sense
Starting point is 01:36:34 that we get space colonization. There's a SpaceX-type company named Gattaca in the movie, and also we get healthy babies. But the second point is more historical: the Asilomar guidelines, which were arguably the inception of many of these bans, soft and hard, against germline editing, those are from 1975. There's a historic argument that Asilomar was actually triggered by the Watergate scandal. Really? Yeah. There was a concern at the time. Some historians argue that the Asilomar guidelines were originally proposed, or at least motivated,
Starting point is 01:37:14 in part because Watergate was fresh in everyone's mind, and there was a concern by scientists that if there was recombinant DNA experimentation that was not well advertised, or not forthright according to some sort of public guidelines, something bad would happen to the scientific community in whatever form. That is one historic argument: that Watergate helped to precipitate the 1975 Asilomar guidelines. I remember I was in grad school. I was doing my joint MIT med school degree at the Whitehead Institute, working on recombinant DNA, right? The first restriction enzymes had come out that allowed you to edit DNA in a somewhat precise fashion,
Starting point is 01:37:59 nothing compared to what we have now with CRISPR. And the headlines of the magazines, the cover stories, were designer babies, and there was so much fear. And that was, God knows, 40 years ago. We're 50 years on from Asilomar. Yeah, 50 years this year on from Asilomar. And I actually had to go back after I saw the story and check to see: is there, at least in the U.S., a single federal statute that bans germline editing? And I couldn't find one, which is a little bit surprising. There's a patchwork of federal and state laws and regs that certainly deter germline editing, but not a single one that actually bans it. So I wouldn't be surprised if in the near-term
Starting point is 01:38:41 future, we find a generational conversation about whether germline editing should in fact be allowed. And we have to remember, right, in 2018 there was a Chinese scientist, He Jiankui, who did this kind of CRISPR editing. He was trying to target the CCR5 cell surface receptor, which would prevent a child from getting HIV. And the guy was just decimated in China, in the press, condemned by the world. Yeah. Yeah. They arrested him. So they put the kibosh on this idea. Seriously, you know. Well, these are really complicated topics, and they all need thought leaders. Just in this podcast alone, between fusion and AI breakthroughs and driverless cars in Boston and now this, they all
Starting point is 01:39:31 need thought leaders. The number of people that need to rise to the occasion and say, here's what we should do, here's an idea, you know, ethical, trustworthy people who know what they're talking about, the need for that is backlogged so deep now. And this is just one of those topics. I'll point here to Hank Greely, who's been working on bioethics for a very long time. You know, the long-term effects of this are really, really consequential. One of my biggest concerns is that Hollywood is decimating our future. Right? I mean, this is my pet rant. You know, every movie out there is dystopian genetics, dystopian killer robots and AI systems.
Starting point is 01:40:10 I mean, no wonder people fear the future if, in fact, the only futures they see in TV series and movies are ones they don't want for themselves and their kids. You see this and you immediately go to this negative vision of the future, which is pervasive in society. We need to retool that. We need to reset that. We need more Star Trek in our lives, or the modern version of it. Maybe, Peter, you're giving a call to action to yourself to start a new Hollywood studio, this time powered by AI, that paints a much more optimistic future. Yeah. Be the change agent.
Starting point is 01:40:47 There is, well, I can't say much about it yet, but there is a project in the works that I'll unveil. The fundamental problem is that human nature is so fear-based, per my earlier comment today, so you're fighting against that. The way to solve for this is to give those embryos psychedelics and solve this right now. That's a way to solve it. We're going to reduce the size of their amygdalas. So they reduce their...
Starting point is 01:41:12 Yeah, I'm serious. You edit it out. You don't need an amygdala in this world. You know what's funny about this whole storyline: the birth rate in Korea now is 0.7 per couple, 0.7 children per couple. Crazy. Like, one topic is editing your baby. The other one is, well, no one's having any
Starting point is 01:41:30 babies at all. So doesn't that seem like a more urgent issue? A harder problem. A much harder problem. Oh, my God. All right. So we're going to wrap this episode. I'll remind folks, if you're interested in the idea of the Moonshot Mates pulling together
Starting point is 01:41:46 a couple of day, amazing event in the fall of next year, send us an email. Moonshots at deamandis.com. Let us know you're interested. And we're going to try and get to a thousand people interested. if we can get there, then we'll pull the trigger and we'll make that happen. Salim, you have something you wanted to mention? We have our next 10X shift workshop happening on November 19th. So we'll be talking about immune systems and organizations and giving people directions on how to build the XO.
Starting point is 01:42:19 So come join for that. It's 100 bucks. People love it. It's limited seats. So come and join. Amazing. Dave and Alex, any closing thoughts today before I play. our outro music today from Adam 822, which is amazing.
Starting point is 01:42:33 I love the fact that our subscribers are sending us, you know, they're musical. They're so good as well. They're so good. I have a closing thought, which is, you know, we're closing out our best venture fund year by far. I mean, just crazy what's happened this year. You know, portfolio gained about $12 billion of value. The company's within the year. Most of those ideas come from Alex.
Starting point is 01:42:59 So I wanted to throw a shout out to Alex for the vision. But, you know, one of the themes in this podcast has been that all these topics, you know, if you look at what Elon's been able to do, like should we have a satellite at the Lagrange point, you know, shadowing the Earth by 0.01% or something like that, if your track record of being right is perfect or very, very good, then you actually get to say, here's the answer, guys, and people will flock to it just because your track record is right. And I think Alex is right on that cusp now of just, and that's why it's a little, cautious on the podcast sometimes too because he doesn't know he actually says he doesn't know unlike me i just say something anyway but but Alex is uh is you know really really um his vision for what will
Starting point is 01:43:40 and won't work is just becoming so honed and so so beautiful so i really wanted to thank you for the gangbuster year we've had yeah thanks Alex Alex a close and very kind yeah um in addition to thank you dave for for the very kind comments i'll just say i spend substantially all of my time thinking about how we solve the hardest problems on earth with AI. So if folks listening are interested in connecting to talk about problems they have or the hardest problems on Earth they want to see solve with AI, definitely feel free to reach out. Amazing. All right, guys. This is a label. Unbelievable episode. I'm going to have to listen to this at least three times. I know. I do too. And we listen, we read your comments. So first off, please subscribe, tell your friends. One of the things I was
Starting point is 01:44:28 so heart-filled about when I was in Mexico City, when I was in Italy, when I was in Spain, when Salim was in Brazil, were all of our fans there telling folks that they share the episode. So thank you for listening. We do love the comments. If you have questions, I really want to do some A&A episodes with our subscribers, so drop them in the comments. Our team will aggregate them. And this is a piece of music labeled, All right, folks, from the Moonshot. to math. The WTF just happened crew by Adam 822. All right, let's enjoy. Trying to pin down that reason why Dave's bouncing like a kid with toys
Starting point is 01:45:37 Fill in the mic with star of noise It's the moonshot's crew changing the game X-Priestreamers know their name from A-I to Rockets and maps unsolved They're building futures deeply involved Yeah Peter shouts amazing That's our Q that's the WTF just happened and that is amazing
Starting point is 01:46:06 That was amazing, unbelievable If you've got a piece of outro music, send it over to us And if it's amazing as that We'll go ahead and play it Gentlemen, Happy Saturday And wishing you guys
Starting point is 01:46:23 What's left of it, I mean, Jesus Listen, I was up at I was up at 3.30 this morning prepping for this episode. Just so much to cover. And we still have another episode. We need to essentially make this a full-time thing because the world is happening so fast that we just can... I mean, just trying to process this episode
Starting point is 01:46:42 is going to take hours and hours out. All right, guys. Love my time with you with each other. Be well. Every week, my team and I study the top 10 technology metatrends that will transform into... over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff. Only the most important stuff that
Starting point is 01:47:07 matters, that impacts our lives, our companies, and our careers. If you want me to share these meta trends with you, I writing a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important meta-trends 10 years before anyone else, this reports for you. Readers include founders and CEOs from the world's most disruptions. companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to Demandis.com slash Metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.
Starting point is 01:47:56 Thank you.
