Moonshots with Peter Diamandis - Claude Opus 4.5, White House "Genesis Mission" & Amazon's $50B AI Push w/ Emad Mostaque, Salim Ismail, Dave Blundin & Alexander Wissner-Gross | EP #211

Episode Date: November 26, 2025

If you want us to build a MOONSHOT Summit, email my team: moonshots@diamandis.com
Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends
Emad Mostaque is the founder of Intelligent Internet (https://www.ii.inc). Read Emad's book: https://thelasteconomy.com
Salim Ismail is the founder of OpenExO.
Dave Blundin is the founder & GP of Link Ventures.
Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.
My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/
Connect with Peter: X, Instagram
Connect with Emad: Read Emad's Book, X, Learn about Intelligent Internet
Connect with Dave: X, LinkedIn
Connect with Salim: X, Join Salim's Workshop to build your ExO
Connect with Alex: Website, LinkedIn, X, Email
Listen to MOONSHOTS: Apple, YouTube
*Recorded on November 25th, 2025
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 I've compared this moment to 1939. This is the Manhattan Project. Similar to the Apollo project that put a man on the moon in 1969. This is an all-in national effort to take the power of AI to use the world's largest supercomputers to advance innovation and science. This is just extraordinary. I think this could be the greatest accelerator for human knowledge in the US yet if it's properly funded and executed. Genesis 1 is, you know, God created the heavens and the earth.
Starting point is 00:00:32 Finally, we have the tools to actually be able to understand them properly. Application of large amounts of compute with the reagents allows us to unravel the mysteries of the Earth and the universe. You know, if we went back to the beginning of the year, did we predict that we'd be here, or is it moving faster than even the few of us could predict? Now that's a moonshot, ladies and gentlemen. Hey, guys. Welcome to our emergency pod for Thanksgiving week. A lot going on here. The Genesis Mission, we're going to be talking about that. We'll talk about Anthropic's new Claude Opus 4.5. I'm here with AWG, Mr. ExO, and thank you, Emad, for joining us. I know it's Thanksgiving for you in England as well, isn't it?
Starting point is 00:01:20 It's Thanksgiving for everyone. Ah, yes, for sure. So I wanted to start with a question, which is, what does Thanksgiving look like in the year 2035? I'm curious. Is it going to change at all? Salim, you want to kick it off? Is 2035 far enough away to make a difference? It's a hell of a difference.
Starting point is 00:01:39 I mean, look, by that point, we should have the cost of Thanksgiving dinner dropping by 10x. It should be personalized to you for your nutrition, so that depending on your metabolism, the turkey or ham or whatever the heck it is, is totally customized to you. I'll have a little device inside me saying, whoa, whoa, whoa, before you eat that turkey, I'm still metabolizing the cauliflower. Give me three minutes, please.
Starting point is 00:02:08 Take a sip, do not drink alcohol quite yet. Da-da-da-da. And I think by then we should have gotten over this hump of kind of expensive energy, so that we have ultra-cheap energy, ultra-cheap food, and we're crossing right into the Alex Rubicon. All right, all right.
Starting point is 00:02:26 My addition is we'll have Tesla bots serving us everything. How about you, Alex? Yeah, I think if at least some subset of humanity is not celebrating Thanksgiving on Mars, some subset celebrating Thanksgiving in the cloud in the form of uploaded humans, and maybe some uplifted non-human animals also celebrating with us at the table, then something's gone terribly wrong over the next decade. Wait, let me get this straight. So we're going to have uplifted turkeys arguing with their lawyers to keep a ceasefire against killing them all.
Starting point is 00:03:04 Yeah, if that doesn't happen, then something's gone wrong over these 10 years. Emad, how about you? What are you going to see in 10 years' time? Yeah, I mean, 10 years is the pessimistic end of the AGI forecast, right? So assuming that humans don't end up like turkeys, where we get happier and happier, and then at AGI it goes straight down. Well, we'll figure out how to do perfectly moist turkey by then. But then, as Alex has said, mathematics should be solved by then, science, et cetera. So you're in the post-abundance world, hopefully, with the robots and more.
Starting point is 00:03:35 And there should be a lot to be thankful about if we can navigate what's coming. Yeah, if we can navigate what's coming. Okay, well, we're going to talk about that. But before we do, I want to jump into our first story, which is a doozy. Let's hear and learn about the Genesis mission coming out of the White House. A very powerful concept. All right, let's dive into this with a video. In every age, humanity invents new ways to see further.
Starting point is 00:04:09 The telescope let us glimpse the stars. The microscope revealed the worlds within us. For centuries, thinkers like Leibniz, Shannon, and Turing dreamed of making all knowledge computable. But today, knowledge grows faster than our ability to understand it. Trillions of data points, a universe of information still unconnected. Now, a new instrument emerges,
Starting point is 00:04:42 one capable not only of observing the universe, but of understanding it. The Genesis Mission will transform how science is done in America, uniting our brightest minds, most powerful computers, and vast scientific data into one living system for discovery. Built on artificial intelligence and quantum computing, it will radically redefine the scale, speed, and purpose of scientific progress in America.
Starting point is 00:05:11 This is the work that will define our generation's legacy. A new revolution begins, one guided not by competition alone, but by curiosity, imagination, and the belief that discovery is the truest form of progress. Wow, just wow. What an incredible story coming out of the White House. Again, the title here: U.S. government launches Genesis Mission, transforming science through AI computing. This is Trump's executive order to use massive federal scientific data sets to train powerful AI models.
Starting point is 00:05:51 The Department of Energy will connect U.S. supercomputers and lab data into one unified platform intended to shrink the research timeline from years to days through AI-driven experimentation, focusing on biotech, fusion, and quantum. It's a big deal. AWG, you want to kick us off? Yeah, I've compared this moment to 1939, and this is the Manhattan Project. And in the Manhattan Project, as I've remarked previously, we turned the country into one big factory for nuclear weapons. In this case, the country is being turned into one big AI factory. And this is incredibly ambitious. We speak of moonshots.
Starting point is 00:06:31 This is an incredibly ambitious moonshot, not just to turn the country into an AI compute factory, but also to supply some of the limiting reagents, as it were, like data sets. Federal data sets that are locked up in a variety of different enclaves are now, according to the EO, going to be unlocked and made available for pre-training, and probably software tools that right now are unavailable will be made available. And I think, to the extent that there may be a race dynamic with China, whose government is also collecting large amounts of data, the Manhattan Project positioning is probably pretty intentional. And I think it's just glorious to see this sort of ambitious unlocking of scarce resources. I'll also point out Dario Gil, who's been named as the mission director for the Genesis Mission.
Starting point is 00:07:23 I worked with him as an undergrad at MIT, and it's really great to see MIT in general, and that level of scientific influence, positioned in, again, a 1939 moment, such an ambitious initiative. Yeah, I should just mention, by the way, our other mate, Dave, is on a research mission in Italy this week. Let's leave it at that. We miss you, Dave. Wish you were here. I mean, this is just extraordinary. I think this could be the greatest accelerator for human knowledge in the U.S. yet, if it's properly funded and executed. Emad, is this something that every country is going to have to follow through and do a similar move? I think that you're seeing this.
Starting point is 00:08:06 In the UK, we had something similar with DSIT, on a much smaller scale, and new regulation, acceleration for nuclear reactors, et cetera. And I think, fundamentally, like, Genesis 1 is, you know, God created the heavens and the earth. And now, finally, we have the tools to actually be able to understand them properly. That's what this is really talking about. Application of large amounts of compute with the reagents, as AWG said, allows us to unravel the mysteries of the earth and the universe. And so obviously that's a massive advantage. But I don't think it's any kind of coincidence that it's the Department of Energy that's running this, because we've talked about how energy is so important and the U.S. has been falling behind on energy
Starting point is 00:08:47 compared to countries like China and more. And you'll see more and more deregulation, more and more fusion, solar, et cetera, play into this. And the impact, again, can be immense if you can figure out any one of these things. And I think that we're in a good place to figure out almost all of them, again, if it's done properly. Salim, your thoughts, buddy. So I think this is where you see the best of government, because they can leverage those global data sets in a powerful way. And so when you can do that, I think it brings out the best of what government is able to do, unlike the private sector.
Starting point is 00:09:23 And so I think that's one really great point about this. The second, I think, is that this is kind of catch-up in a sense, because lots of countries use their federal data sets in different ways. China and France have been doing it for years, et cetera. So this is catch-up in one sense. But taking the data, which is now a sovereign resource, and then applying it with all the AI capability the U.S. already has, I think really amplifies a huge outcome. So the potential here is kind of incredible. It reminds me of back in the days when Silicon Valley started: they created secret labs at MIT, Harvard, and Stanford to figure out how radar could be blocked, and came up with aluminum-foil chaff, thrown out of the planes. And they had to create this countrywide initiative to protect and solve for World War II. And this is kind of like that initiative. I think this
Starting point is 00:10:20 is that big. I love it. I mean, this basically, in my mind, reframes basic science as a compute problem. Yes. And throwing everything we have at it. Yeah, I think that's the elephant in the room. And I'll also point out the Department of Energy has also clarified that one of the goals of the Genesis Mission is to double American scientific productivity in the next decade. When we speak of Thanksgiving 2035, I would say if we haven't 10x'd or 100x'd scientific productivity by Thanksgiving 2035, also something has gone wrong. But I think a 2x-ing of productivity is an excellent baseline here. Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging
Starting point is 00:11:05 from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs
Starting point is 00:11:34 building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to Diamandis.com slash metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.
Starting point is 00:11:52 I'm reminded of, we had a very senior guy from the DoD at Singularity one year, Peter. Yeah. And during the Q&A, he goes, this is all great. You guys all love these exponentials. VCs are all hovering at the
Starting point is 00:12:09 arbitrarily long flat part of the curve, which is government. And this allows government to really accelerate that part of the curve. So I think we'll see exponentials moving forward in time in a pretty amazing way. Yeah. I mean, I do, please go ahead. Yeah, but I think it's interesting how it's moved from the NSF, and again, the classical grantmaking that's been disrupted over last year now, to a much more of a techno-optimistic approach. And I think one of the key things that will determine the success of this is, is this Manhattan project style closed and private? Because obviously, even though they've announced it, it could be, or is it open?
Starting point is 00:12:45 If it's open science, I think it'll be truly exponential. But if it's actually building up public-private partnerships with strong IP protections and doing a lot of stuff in private, I think it'll have a much lower impact on the other side. Yeah. I'm excited to just watch this, right? This, for me, and you said it perfectly, Alex: it's a moonshot.
Starting point is 00:13:07 It's an extraordinary nationwide moonshot. It's nothing less than... If not a shot at the moon, as I sometimes say. And we'll talk about that. I mean, the only difference is it hasn't set an objective mission, like, you know, get to the moon and back before the end of the decade. But this is America throwing its might and, quote, coordination at a massive opportunity.
Starting point is 00:13:34 Well, and look at the areas, right: biotech, fusion, and quantum. I mean, those are all moonshot domains that totally rewrite the rules of life. I would maybe just add, as I've pointed out in the past, I think the next big thing after solving superintelligence, which arguably has either already been solved or is eminently solvable, is solving math, science, engineering, medicine. This is what that looks like at grand scale. This is taking federal resources and applying them singularly to solving grand challenges. All right. Spectacular. All right. Let's go on to our second big story of this
Starting point is 00:14:14 particular week. It's what's going on with the hyperscalers, but in particular Anthropic. Nice to see Anthropic making some moves. Here is the story: Anthropic releases Claude Opus 4.5, which uses 76% fewer tokens to reach the same results as older models, outscored the entire engineering team, leads in seven of eight programming languages on industry coding benchmarks, and improves multi-agent support by 15%. Alex, want to kick us off? How significant is Opus 4.5? Yeah, we're nearing, if not already at, the point of recursive self-improvement. The point of recursive self-improvement, many would say, is the point at which more compute, more infrastructure is being allocated
Starting point is 00:15:02 by frontier labs to AI researchers than to human researchers. And I think that the most important indicator isn't that the benchmarks, the evals, are going up and to the right, although they are, and it's wonderful, and I love benchmarks. It's that Anthropic has also announced, as you alluded, that incoming employees to Anthropic, in particular on the performance team, are now being outperformed on key tests, key homework assignments, by the AI. I think that's the canary: given that this model was arguably pre-trained with a data cutoff date several months ago, we're imminently, if not already, entering the moment of recursive self-improvement. I think that's
Starting point is 00:15:49 the bigger thing. The smaller headline is, I of course have my evals whenever these codegen models come out. One of my other non-cyberpunk FPS evals is asking it to see if I can one-shot a Mario-style side-scroller, and it did a beautiful job. Amazing. And Dario's been talking about, you know, being able to get to 100% or 90% of all the coding being done. So this is a big move in that direction. Emad, love your thoughts here.
Starting point is 00:16:16 Imad, love your thoughts here. Yeah, I mean, I think from our tests, like we made, we got top of the SWE Bench Pro benchmark, which is scale. AI's one, and it was really difficult with 45%. This is with intelligent internet, right? Yeah, that's with intelligent internet framework using a combination of the other models. This model without reasoning scored 52% without even reasoning tokens, which I think was the most shocking thing for me.
Starting point is 00:16:43 So usually, like, the big breakthroughs we've had are that the models can think longer, they can check, et cetera. We didn't think it would be that way with just the straight output, and the quality of the code it outputs is actually just really, really good, which is going to be very interesting, because the average codebase is 100 to 200,000 tokens, and this should be able to one-shot most codebases by next year. And the cost has dropped 67% from the previous version, so it's now $25 per million tokens as well.
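Emad's cost point is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming the quoted $25 per million tokens can be treated as a flat blended rate (an assumption on my part; real API pricing usually distinguishes input from output tokens):

```python
# Back-of-envelope: what does it cost to emit an entire codebase in one shot?
# Assumes a flat $25 per million tokens, as quoted in the discussion.
PRICE_PER_MILLION_TOKENS = 25.00  # USD

def one_shot_cost(codebase_tokens: int) -> float:
    """Return the USD cost of generating `codebase_tokens` tokens at the flat rate."""
    return codebase_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# A typical codebase is quoted in the discussion at 100k-200k tokens.
for tokens in (100_000, 200_000):
    print(f"{tokens:,} tokens -> ${one_shot_cost(tokens):.2f}")
```

Under these assumptions, regenerating even a large codebase costs single-digit dollars, which is the force behind the claim that coding tokens will be ubiquitous.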
Starting point is 00:17:12 So coding and tokens will be ubiquitous, and it may not be that reasoning tokens are what are needed for those tokens, which was, again, completely shocking to me, that it would score higher non-reasoning than reasoning. Fascinating. Salim, any thoughts here? It feels to me like we've moved geopolitics from nation-states to these hyperscalers. I mean, this is incredible stuff that's happening from each of these big four or five, and it's rewriting all the rules of everything. Then you have states riding on top of these,
Starting point is 00:17:42 which is a much better way of doing it than the other way around. Alex, I want to take it back to our subscribers here. What does this mean for the average individual who's not using Opus 4.5 to code? I mean, I think there are multiple levels of impact. The highest level of impact is, I've spoken in the past about what I call the innermost loop of civilizational progress. This is the innermost insofar as we're starting to see models that are so strong that they can conduct research and generate code for better versions of themselves. That's the innermost recursive self-improvement loop. That, I've argued in the past, is going to spin out and touch the rest of the economy. It's already in progress, but you'll see much more of it over the next two to three
Starting point is 00:18:29 years as it fully solves robotics and physical-world automation, leading, optimistically, to radical economic growth. So I would say that's the macro story. The micro story is, in the meantime, it's going to be trivial to generate programs, applications, complex workflows on demand. Implicitly, explicitly, as I've mentioned in the past, with the length of a tweet you'll be able to create a AAA-level first-person shooter or video game. People are going to be creating so much software, so much more software, so trivially, that we'll be drowning in AI-generated software of very high quality. That's the narrow micro-impact in the short term. Amazing.
Starting point is 00:19:14 Please go ahead, Emad. The one other thing that's interesting in this is that by itself, it scores 75% in multi-agent when it's the same agent, Opus 4.5. When they combined it with Haiku, which is a very low-cost agent, or Sonnet, they got up to 88%. So Opus is a really good orchestrator of agents, and this is the multi-agent support type of thing. And everyone was saying, well, agents can't look after other agents. This is the first agent, or the first AI, that provably can, and that opens up the whole swarm nature of things that we've been discussing. It reminds me of, there's a competition that people do, called the spaghetti competition,
Starting point is 00:19:53 where they take, you know, uncooked spaghetti and see who can get the highest vertical height. And they found that the team that had a very efficient executive assistant as part of their team always scored the best. This is the marshmallow challenge. The marshmallow challenge, yeah. Here's how it goes. You get 20 sticks of spaghetti, a meter of tape,
Starting point is 00:20:13 a meter of string, and a marshmallow, and you have to build a structure so that the marshmallow's on top, and whoever gets it highest wins. Importantly, you're right, Peter, the winners are the folks that have an EA on the team. But the second place are kindergartners.
Starting point is 00:20:31 And last place, last place, are MBAs. Consistently. They lie, they cheat, they kind of break things, et cetera. It's an amazing exercise. You were going to make a second point on this one, Alex, I think. You know, if we went back to the beginning of the year, could we have, did we predict that we'd be here, or is it moving faster than even the few of us could predict? I feel like it's moving faster than the few of us could predict. Although, I'll pat myself on the back narrowly for this and say, as Dave, who's not here at the moment, would attest, and I think, Peter, you were in this group chat as well, at the beginning of the year,
Starting point is 00:21:12 Dave challenged me to formalize a prediction for what end-of-year solving math would look like. I was banging the drum throughout the year: math is going to get solved, math is going to get solved. I made a very specific prediction about FrontierMath Tier 4 and AI models passing it. And if anything, they've slightly overshot my very conservative baseline. How dare they? I know.
Starting point is 00:21:35 I think we're more or less where I expected we'd be by the end of this year in terms of the strength of AI models solving math, science, engineering. I'll add one last thing on the Anthropic story here, which is, last pod we talked about their economic success as a business, that they're heading towards significant profitability in the next two years. And this is part of that equation. So congratulations to Dario and his team on Opus 4.5. Let's go to our next story, again on the leaderboards, and I'll turn to Alex for this: the ARC-AGI leaderboard update. And this is not just performance. This is performance per dollar, Alex. That's right.
Starting point is 00:22:14 So the big story: we're driving the cost of intelligence to zero. The cost of superintelligence is being driven to zero as well. The ARC-AGI 1 and 2 benchmarks are really lovely benchmarks I've supported in the past. The general theme is, can AIs successfully visually reason and visually synthesize new programs to reason? And what we're seeing for the first time, between the Opus 4.5 results that are demonstrating breakthrough state-of-the-art cost efficiency of visual program synthesis, and then also an earlier result I don't think we got a chance to touch on, which is that a company named Poetiq has announced superhuman-level performance on the ARC-AGI 2
Starting point is 00:22:57 benchmark, is that visual program synthesis is starting to get solved. And the world needs harder benchmarks. We need harder, better evals. The so-what for everyone right now is that so many problems in the world, especially in the physical world, rely on some sort of visual reasoning, some sort of intuitive ability to manipulate the physical world, to spot patterns, and to synthesize implicit programs, even if they're never written down as source code. And ARC-AGI 1 and 2 are really excellent ways to capture problems that humans generally find easy but AIs have historically found challenging. And that's all getting solved now and saturating. You mean things like proprioception, for example? Yes. And being able to,
Starting point is 00:23:45 in general, recognize a pattern and be able to solve it visually. Emad, your takeaway from this one? Yeah, I think even the authors of ARC-AGI are like, what on earth do we do now? I think I've seen some tweets from them saying that. I think the next benchmarks are dollars. So you have Vending-Bench and some of these other benchmarks where it's like, how much money can they earn? You start to see trading benchmarks. You've got to the tipping point now where these models go from very smart people you tap on the shoulder, that can do individual tasks, to being able to do real economic work. And we'll see many more benchmarks where the axis is literally dollars. And that's the next year's story, I think. How far are we, guys, from, you know, the single entrepreneur with a set of
Starting point is 00:24:31 agents building a billion-dollar business? What do you think, Emad? I'd be surprised if it wasn't within two years, probably next year. There are some amazing entrepreneurs out there, and their only constraint was, how do we scale talent that listens to me? And given they'll be good at using these, again, it's a year or two away at most. Alex? I think it's nowish, in the sense that right now, as I've remarked in the past, you see these poor baby AGIs that have some agency sort of peddling altcoins on X. I think the first zero-human or half-human billion-dollar startup is probably, for better or for worse, probably unfortunately, likeliest to be a baby AGI that pumps an altcoin
Starting point is 00:25:21 that becomes worth a billion dollars. And I think we could do way better as a civilization than pumping altcoins, but that's probably, unfortunately, where it's going to happen first. I would have gone for porn rather than pumping altcoins, because it's such an obvious path of least resistance. But in terms of the broader picture, there's a colleague of ours that we all know who launched 47 AI startups in a month, a couple of months ago. So people are now kind of using this as a platform to really kind of
Starting point is 00:25:58 change the game, and whole incubators are just launching AI startups. So I'm going to make the point that that's already in the works, and it just has to hit a market segment. So here's the question for you then, Salim and Emad and Alex, which is: is this just going to accelerate the rich-poor divide, right, in terms of the ability for now single individuals, who, let's face it, are 21, 22, 23 years old, just out of MIT, instead of Stanford, able to launch something and create extraordinary wealth at a pace that doesn't need other employees as part of their team? Well, what's going to happen is you'll have that happen, but the rich-poor, the ability to
Starting point is 00:26:43 go from poor to rich has never been faster. And, you know, this is a really important point that I think you pointed out in Abundance, Peter: the richest people in the world used to exclusively have inherited their wealth, and today the richest people exclusively have earned their wealth. And that loop is going to just accelerate. And now you're going to get a hundred Vitaliks and Sam Altmans, et cetera, thousands of them, just spinning off companies. The bigger question, I think, is what happens to the broader economy when this happens? Yeah, Economy 3.0.
Starting point is 00:27:16 Alex, please. I think all of this capital-substituting-for-labor discussion misses an important point, which is that these AI agents are arguably neither capital nor labor. They're a new third category. And everyone who's hand-wringing, and I hear this a lot: oh, well, like, how are we supposed to survive? Not everyone wants to become an entrepreneur. I would argue a near future where everyone survives by, quote-unquote, becoming an entrepreneur misses the point entirely. It's not, I would expect, going to be the case that everyone becomes an entrepreneur. Everyone's going to become an investor. The entrepreneurs, increasingly,
Starting point is 00:27:57 are going to be these AI agents that are identifying and solving valuable problems. And humans, the average human, average unaided biological meat body human, is going to be able to invest in fleets, in entire economies and indices of AI agents that are acting as the proximal entrepreneurs. This is the accelerando premise also. Exactly. Okay. Another, another instigation.
Starting point is 00:28:26 I think I might just say something about this, because he's been the accelerando. and studying the economy. I am. I'm going to go to him next, but this is, you know, this is Accelerondo as... I don't mean to butt in on the hosting. Oh, Salim, you do a beautiful job as well. But this is the Accelerando playbook for people to read. Imad, over to you, pal. You've been thinking about this very deeply. Yeah, Accelerando is a great book. Obviously, my book, The Lost Economy is also great. But the problem is that it's going to be very difficult to out-plan and out-compute something like Claude Five.
Starting point is 00:28:57 when it comes to coming up with businesses, unless you have skin in the game and you care. This is the main thing, because they will try things dynamically and they'll just move on efficiently, whereas you can apply these agents to tasks. And the key thing is, in economic terms, it's all variable cost. Normally when you had a company, I had to go and hire someone. That's a pain to do. You know, I had to launch my own servers before the cloud. Now everything is variable cost, and it's also cash flow positive,
Starting point is 00:29:25 because you typically pay the AI providers a month or two after when you have an enterprise contract and you charge people up front. So you can have brand new economic models where you're taking information, organizing and adding value to people. And I think that does close this rich, poor divide because you won't know where companies are coming from. And the compliance and everything can be done automatically now. We're actually seeing around some of these big AI startups, entire things that will do your tax compliance, that will do your financial forecasting, that will, automatically balance payments and things like that. And the stack is nearly ready.
Starting point is 00:30:00 Again, it's about a year away before you can launch a business probably in minutes. Amazing. With everything there. Can I mention something here? Of course. A plug for EXO here, which we stumbled across accidentally. Peter Reeve quoted Jeremy Rifkin's book, The Zero Marginal Cost Society. Yeah.
Starting point is 00:30:17 One of the, about three quarters of the way of writing the EXO book, we stumbled upon this economic kind of insight. When you're running a business who worry about demand and supply and hopefully the cost of demand and the cost of supply, hopefully you're on the right side of that equation, what the Internet did, it allowed us to drop the cost of demand exponentially, online marketing, referral marketing. Every company is trying for a viral loop. If you get there, your cost of acquisition goes to zero, which is an amazing thing. We saw an initial wave of YouTube, Facebook, et cetera, explode out of the gate with that. What exponential organizations and new models have done is drop the cost of supply. exponentially, right? So you think about Airbnb, the cost of adding room to their inventories near zero. If you're high, you have to build a hotel. And with the launch of Amazon web services, you could take computing off the balance sheet and make it a truly variable cost, to EMA's point. Everything now becomes a variable cost. You have almost no capital expenditure. So now you take out the denominator, the market cap explodes. And for the first time, you have a
Starting point is 00:31:18 breed of organization that with low cost of demand, low cost of supply. And that's like a magical holy grail for business. And how we navigate that is going to be unbelievable over this next few years as this paradigm rolls out. I love it. All right. I'm going to jump into our next story a lot still to cover. So chat GPT introduces shopping research. It compares and searches and provides sort of recommended products that you're interested in. There's no question that this is coming out on Black Friday. They are moving this quickly. This uses ChatGPT Mini. And their claim is that they're able to get accuracy of sort of best predictions at what you want to buy up to 64%. So for me, this is about replacing the search engine, the affiliate blogs, YouTube reviewers, or Amazon's
Starting point is 00:32:11 own recommendation engine. It's AI replacing the entire product research economy. In this one, at least from my perspective, Alex, I'm curious: the middleman's going to lose and the models are going to win. What are your thoughts? Critically, not just generalist models. This is a specialist vertical agent. To the extent that there was some expectation that we'd end up in a singleton near future where there's one generalist agent that does everything, it appears that's not the case. To the extent that OpenAI is a leading indicator, by my count, OpenAI has launched at least two major vertical specialist agents. They launched Deep Research originally,
Starting point is 00:32:57 which is general research, and they've launched a coding agent, Codex. And this is the third-ish vertical agent, by my count, that they've launched other than the baseline model. And I think it's really interesting. Where are the generalist models? I mean, yes, other frontier labs are launching generalist models, but we're starting to see a proliferation of specialist models. I think we're going to see many more. I wouldn't be surprised if we see more specialist post-trained models for finance and for medicine and for management consulting, just picking off broad industry verticals one by one; in this case, it's going after consumer purchases. But of course, I don't want to be calling on a particular model. I just want my AI to do this
Starting point is 00:33:41 for me, right? And it will. Yes, they're throwing a lot of resources at model routing and routers in general. So what you'll gain with this umbrella router, I think, will be a single pane of glass, a single UX surface that you talk to. Love that. I have a question. Yes, Salim.
Starting point is 00:33:59 How far are we from a, you know, Peter, you call it Jarvis, right? A personalized layer that watches on your behalf, is totally secure from a privacy perspective and a sovereignty perspective, and navigates the external world for you. So if you've got shopping you want to do, or you need to buy something, or you need something that you may not even know you need, it's figuring out which of the agents to use and sorting it out. How far are we from that point? Now, now. I mean, what you're describing, Salim, is, I would argue, like a computer use agent, a CUA. And Microsoft and other major companies and frontier labs already have CUAs that are either about to be rolled out or have already been rolled out and are in
Starting point is 00:34:46 beta stages, to do just what you described for a desktop. Now, I think what Iron Man has is a CUA on top of a heads-up display, an HUD, but that can come as well, imminently. You see, what I find interesting here is the notion that we're about to give our AIs access to everything we read, everything we say, and our attention and intention. So as soon as we get our heads-up glasses or augmented reality glasses that are able to not only have forward-looking cameras, but actually cameras looking back at our pupils to determine what we're staring at, right? If I'm staring consistently at that beautiful lamp over Alex's shoulder and my AI says, would you like a lamp like that? Or if I make some side comment to somebody else, it may purchase it for me and ship
Starting point is 00:35:38 it to my house. So this ability to understand what we truly want, by listening to our conversations or looking at where we look, empowers this AI to become our magical shopping agent in many ways. Emad, how do you think about this? Yeah, so Andy Jassy, CEO of Amazon, recently said, I believe, that their agent Rufus, which I doubt any of us have actually used, has 250 million users as a shopping agent. And they're estimating next year $10 billion in incremental sales from it, given conversion statistics up to 60% higher. Who would have thought? I think that the key thing is, where is it? You know, Bing and Teams and other things had access to the user's eyeballs. The challenge for ChatGPT and OpenAI here is how do you become that first intentionality on the
Starting point is 00:36:28 shopping experience. And then, what type of shopping is it? Because if it's toilet paper, who cares? You just want your AI agent to just do it automatically. If it's super discretionary, some people apparently enjoy shopping. Maybe not people like us, but definitely my wife and others. And then you have this middle bit: like, how many TVs do you really buy? You kind of know what TV you're going to get. So I think that the key thing is, who is the AI next to you? You go to Amazon for a shopping experience.
Starting point is 00:37:07 You use Rufus. You do a Google search, you now have Google AI mode up there. The key thing, I think, going to Salim's point, is who comes up with the agent that's the most charming and engaging and licenses Paul Bettany's voice, you know, for a Jarvis, because that can then disintermediate everything, and that's what the fight is on for now. Fascinating. All right, let's move on. Let's get to Google here. Google further encroaches on Nvidia's turf with their new AI chip push. So Google has launched the Ironwood TPU. It's their seventh-generation AI chip, with four times the performance of their previous version.
Starting point is 00:37:36 And importantly, instead of selling hardware, Google is now offering their TPUs as a cloud service. For example, Meta is running on them without purchasing the TPUs. In the photo here, we have Thomas Kurian, the CEO of Google Cloud, who has been crushing it. Google Cloud's been doing amazing. And this puts them directly in competition with NVIDIA. Alex, how do you think about it? There's been so much hand-wringing, Peter, over NVIDIA's purported monopoly, or CUDA as a purported architectural monopoly.
Starting point is 00:38:09 GPUs are now finally facing healthy competition. We see TPUs that are being both purchased, according to this reporting, as well as licensed and rented. We see, obviously, AMD with their own stack. We see Trainium and other Amazon chips, and ASICs in general. And I think what all of this is turning into is, finally, accelerated compute becoming a fungible commodity. It's not just a one-supplier commodity. It is a multi-supplier, very healthy, very heterogeneous ecosystem of fungible, accelerated compute, which is exactly the sort of competitive ecosystem we want to find ourselves in.
Starting point is 00:38:51 Emad, do you have a comment? Yeah, so we used thousands of TPUs a few years ago, from the V5s. This now has 10 times the compute, and the chip size, the single die, the interconnectedness of the Google chips is beyond anything you've seen. So you've gone from 64 in a unit now to, I believe, 4,000, no, 9,000. So what Google's really, really good at is connecting lots of chips in one place, and even multi-data center. We had runs of up to 50,000 of their low-energy chips. And what that's important for is context. So right now, actually, DRAM prices have gone up by about five times. Yeah.
Starting point is 00:39:32 So if you want to get the DRAM for your gaming PC, it's gone up crazy. Google actually has the ability to use cheaper chips at massive scale to do large context window things. And that means that Gemini has a million, two million input tokens, from video to audio to others, whereas it's still limited on other GPUs. And that's going to become even more of a difference going forward. Google originally built these chips to power Google Search, and now they've matured to a point where they can offer them to everyone, even hosted. So the cloud service has been available for a few years,
Starting point is 00:40:07 but now they're exploring actually saying, Meta, you want it in your own data center? We can look at that. And that's going to be super interesting going forward, particularly as RAM versus FLOPs becomes the key differentiator in terms of performance, because context becomes almost everything, because the models are already really fast, to be honest.
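A rough sketch of why long context is a memory problem rather than a compute problem, as Emad describes: the attention KV cache grows linearly with context length. All model dimensions below are illustrative assumptions, not any specific model's real configuration.

```python
# Hypothetical transformer dimensions (assumptions for illustration only).
layers = 80
kv_heads = 8            # grouped-query attention heads for K/V
head_dim = 128
bytes_per_value = 2     # fp16 storage

# Each token stores one key and one value vector per layer per KV head.
bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value

context_tokens = 1_000_000   # a Gemini-class million-token context
total_gb = bytes_per_token * context_tokens / 1e9

print(f"{bytes_per_token / 1024:.0f} KiB per token -> {total_gb:.0f} GB of KV cache for 1M tokens")
```

At these assumed dimensions the cache alone runs to hundreds of gigabytes, which is why cheap, well-interconnected memory matters more than raw FLOPs for long-context serving.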
Starting point is 00:40:31 Well, just kudos to Google. I mean, they continue to crush it week over week. Two plugs for us here. About three months ago, we said two things. One is that Google would inevitably start to lease or sell the TPUs. And here we are. And second, I believe, Emad, it was you a few months ago on the pod who said, invest in DRAM companies, because DRAM is going to become the short supply, et cetera, et cetera. So it seems that we're typically three months ahead of the game.
Starting point is 00:40:58 Amazing. Amazing. Salim, that's great. This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase
Starting point is 00:41:44 when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5X your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. All right, let's move on to the next story here. Amazon is spending up to $50 billion on AI infrastructure for the U.S. government. It's projected to add 1.3 gigawatts of new data center capacity, beginning construction in 2026. So what's the story here? AWG, do you want to take a shot?
Starting point is 00:42:25 Yeah, the government clouds: in many cases, including with AWS and otherwise, the government has its own availability zones. And they're notoriously undersupplied when it comes to accelerated compute with GPUs. And I think it's sort of surprising, given that the public sector is, depending on how you count, either half the economy or about a quarter of the U.S. economy, how compute-starved, or at least how GPU-starved, it's historically been. So I think this is a welcome investment, at least from my perspective. We want a vibrant public sector, vibrantly supplied with accelerated compute.
Starting point is 00:43:07 And I view this as a very positive step in that direction. Nice. Some of the stats here in this article: AWS is serving 11,000 government agencies and expecting to spend $125 billion in capital expenses by end of 2025. Massive support, right? AWS has really just dominated. Is this essentially the federal government saying AWS is their cloud provider? That's a big deal if that's the case, because that's what it seems like.
Starting point is 00:43:32 Well, the U.S. government has multiple cloud providers. This is pretty well publicized and reported on. But Amazon's AWS is a key supplier of U.S. government cloud resources. All right. I'm going to move us along here. Also, another article on Amazon here: their data center tally tops 900. And we forget the fact that Amazon, because of AWS,
Starting point is 00:43:58 has been running a massive number of data centers around the world, in over 50 countries. Launching now something in Indiana, a 1,200-acre data center, and they're putting it up and getting it online faster than anybody else. Any particular thoughts on this one? I was struck by the fact that the Indiana one
Starting point is 00:44:17 uses 2.2 gigawatts of energy. That's like an unbelievable amount of energy for a data center. That's a small country's worth of power. Yeah, I would just maybe note we're tiling the Earth with compute. That is what we're really talking about here, and this is just the opening act. And the Indiana data center in particular: we were speaking about Anthropic a few minutes ago. That Indiana data center is the core computing facility for Anthropic, both for training and for inference. It's called Project Rainier, and it was farmland.
Starting point is 00:44:52 That was converted almost overnight. I mean, it took about a year, but almost overnight, into modern compute. This is 1939, when you see farmland in the Midwest being converted to compute resources. Yes. Alex, you can't imagine the number of comments I had from people saying, what does Alex have against the moon, from our last podcast? Isn't it obvious? Isn't it obvious?
Starting point is 00:45:17 The moon's had it coming for years. We have an AMA section that we're going to hit in a few minutes, and that's one of the questions being asked. Don't we need to save the moon? It's lunacy, Salim, lunacy. Touché, touché. Oh, goodness. All right.
Starting point is 00:45:35 The third Amazon story here is Amazon opens an $11 billion AI data center in rural Indiana. We've heard about this already. It's running 500,000 Trainium 2 chips. So how do Trainium chips compare to the TPUs, and to NVIDIA's GPUs? What do you guys think about this? So we used a bunch of them previously. Trainium 2 chips are equivalent to the Hoppers,
Starting point is 00:45:58 and they're good for inference, but they're much more difficult to do the large-scale training runs on. But if you look at the breakdown now, you have a core cluster for training, and Anthropic just announced another big NVIDIA deal, $10 billion with Microsoft and NVIDIA. But for serving up Claude,
Starting point is 00:46:16 you always hit those capacity constraints. And Trainium is very solid for inference, similar to how previously Amazon went all in with Graviton, which was their CPU equivalent. And now that runs massive workloads for Netflix and everyone around the world. So I think that it's still one more generation until Amazon starts to catch up. Again, they're about a generation behind. But all those chips are going to be used, probably for inference versus actual training. I have a crazy question here.
Starting point is 00:46:44 So if your model is this closely bound to the chip, then if you ran an inference model for any of these big hyperscalers on Trainium versus TPUs, do you get a very different result because the chip is different? No. So you typically use a framework like OpenXLA, which automatically translates it to different things once it's actually doing the inference, because the process of inference is quite straightforward: forward matrix multiplications. The process of training can be really complicated in the way that things move back and forth, et cetera, and that's where you really need to have high resilience, high interconnect. Whereas with a single chip, or a group of 8 to 16 chips as these are, they're just doing forward passes. It's a lot easier to code and to have speed on. But again, there are certain things, like Cerebras, for example, that will give you much faster inference, or a highly optimized Grace Blackwell, et cetera. So that's much simpler than training.
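A minimal NumPy sketch of the asymmetry Emad describes: inference is just forward matrix multiplications, while training must also run a backward pass that retraces the network in reverse, holding intermediate activations to compute gradients. The tiny two-layer network here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # first layer weights
W2 = rng.normal(size=(8, 3))   # second layer weights

x = rng.normal(size=(1, 4))
y_target = np.zeros((1, 3))
y_target[0, 0] = 1.0

# Inference: a pair of forward matrix multiplies, nothing else.
h = np.maximum(x @ W1, 0.0)    # ReLU hidden activations
logits = h @ W2

# Training: the backward pass reuses cached activations (h, x) to
# push gradients back through the same graph in reverse order.
grad_logits = logits - y_target          # gradient of squared error (factor of 2 dropped)
grad_W2 = h.T @ grad_logits              # needs the cached hidden activations
grad_h = grad_logits @ W2.T
grad_h[h <= 0] = 0.0                     # ReLU gate blocks gradient where h was zero
grad_W1 = x.T @ grad_h                   # needs the cached input
```

The backward half is what demands the resilience and interconnect mentioned above at cluster scale; serving only ever runs the top half.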
Starting point is 00:47:51 Yeah, maybe to expand on that: backprop is the key problem. If we could do away with back propagation at training time and have some sort of magical alternative... I remember Boltzmann machines were one sort of concept for how we could do away with global back propagation. If we could do away with backprop entirely, then one could imagine a near future where training looks a lot more like inference, and training would be a lot more portable and a lot more parallelizable. But no one has yet, in production, figured out how to do away with backprop. But aren't LLMs fundamentally anchored to back propagation? At training time.
Starting point is 00:48:26 At training time, not at inference time. Inference time is only forward propagation. So if we could figure out how to train... But back propagation is fundamental to the training. Back propagation is fundamental to training of neural networks, for now. But there are lots of paradigms; there's a whole cottage industry of researchers trying to figure out ways to eliminate back propagation entirely. If we could eliminate back propagation, that would certainly eliminate a training-time compute bottleneck. And by the way, just as a reminder, if you're listening and you've
Starting point is 00:48:56 just heard a conversation that you think is being spoken in Greek, then my suggestion is join the club, take some notes, and go to your favorite LLM. Have a conversation. As Bruce Willis said in Die Hard, welcome to the party, pal. Yeah, I'm going to hit what you said earlier, Salim and Alex. I think the most significant thing about this is going from
Starting point is 00:49:18 farmland to seven buildings in one year. That is so big. 2.2 gigawatts. I mean, it's just the beginning, and we're knocking down regulations and capital is flowing in. This is continuing. All right. I want to get to our AMA. I want to hit a couple of stories on the science side real quick. We've been talking about launch costs. We've been talking about launching data centers.
Starting point is 00:49:40 We've been launching to the moon. I want to give folks a little bit of an overview for a moment about how quickly the cost of launch has been changing. So the space shuttle, which was originally supposed to cost about $50 million per launch and launch 50 times per year, ended up costing somewhere between $1 billion and $2 billion per launch and was launching anywhere from one to four times per year. Massively expensive: $50,000 per kilogram, super high cost. Falcon 9 comes in, drops the cost at least 20-fold, to $2,500 per kilogram, by making the first stage fully reusable, right? It's got nine Merlin engines, so you're recovering most of the engines on the Falcon 9. And then here comes Starship, which is reducing it, again, another 25-fold, to $100 per kilogram. So, you know, how many kilograms do each of us weigh?
Starting point is 00:50:38 And what's your cost to get into orbit? It becomes affordable all of a sudden, right? So Starship becomes fully reusable. And then Elon comes and starts speaking about the work of Gerard K. O'Neill. Jerry O'Neill at Princeton University had actually designed and built, at least on the ground here, what are called mass drivers: electromagnetic rings that accelerate a bucket to lunar escape velocity, just for the cost of electricity, which, by the way, on the moon is relatively cheap because you've got all the solar flux. You can accelerate something and shoot it towards the Earth into a, you know,
Starting point is 00:51:16 Earth acquisition orbit. And we get here the price coming down not 100-fold, but a thousandfold, to 10 cents a kilogram. So all of a sudden, we gain access to all the resources beyond Earth. I'd like to remind people that everything we hold of value on Earth, metals, energy, real estate, all these things are in near-infinite quantities in space. So the nine-year-old space geek in me is super excited about what's coming. Alex, you want to add anything? Yeah, I'll add that disassembling the solar system is going to require low cost to orbit. So this is great. Alex, you're going to start protests outside our front door.
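The 10-cents-a-kilogram mass-driver figure can be sanity-checked with freshman physics. Assuming a perfectly efficient accelerator and electricity at $0.10 per kWh (both assumptions, not figures from the episode), the kinetic energy for lunar escape prices out at pennies per kilogram:

```python
# Kinetic energy per kilogram at lunar escape velocity, priced as electricity.
V_ESCAPE_MOON = 2380.0      # lunar escape velocity, m/s
PRICE_PER_KWH = 0.10        # assumed electricity price, USD/kWh

energy_joules = 0.5 * V_ESCAPE_MOON**2      # E = 1/2 * m * v^2, with m = 1 kg
energy_kwh = energy_joules / 3.6e6          # joules -> kilowatt-hours
cost_per_kg = energy_kwh * PRICE_PER_KWH

print(f"{energy_kwh:.2f} kWh/kg, about ${cost_per_kg:.3f} per kilogram")
```

Under these assumptions the answer lands right around the quoted figure; real-world losses and capital costs would push it up somewhat.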
Starting point is 00:51:56 Dyson sphere in the way. I like this; it's generated by Nano Banana as well. You can see the little thing. Yes, of course. All right. I mean, basically this turns lunar launches and rocket launches into software. That same loop is now
Starting point is 00:52:13 hitting this. And two things here. One is, importantly, this is a log scale. So for folks watching, this is like ridiculous orders of magnitude per level.
Starting point is 00:52:39 That's unbelievable. In a very physical environment, this is not some social media gaming Silicon Valley play. This is getting out of Earth's gravity well. This is nuts. This is energy, baby. You know, the one complaint I have about my conversations with Elon is he wants to get out of Earth's gravity well and then go directly back into Mars's gravity well. You know, I'm far more interested in staying either in the, you know, Earth-Moon system or, better yet, building what some have called O'Neill colonies, in which you are based. I'm not disassembling the moon, Alex. I'm disassembling the asteroid belt. All those pesky asteroids deserve to be disassembled and used. I mean, sure, if you want to start with the asteroid belt, we can start there. Training wheels for solar system disassembly. That's great.
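For the O'Neill colonies mentioned here, "gravity" is the centripetal acceleration of the spinning hull, a = omega squared times r. A quick sketch, where the roughly 3.2 km radius (O'Neill's Island Three scale) is an assumption for illustration:

```python
import math

g = 9.81            # target acceleration at the hull, m/s^2 (1 g)
radius = 3200.0     # assumed cylinder radius, m (Island Three scale)

omega = math.sqrt(g / radius)       # rad/s needed so that omega^2 * r = g
period_s = 2 * math.pi / omega      # seconds per full rotation
rpm = 60.0 / period_s

print(f"omega = {omega:.4f} rad/s, one rotation every {period_s:.0f} s ({rpm:.2f} rpm)")
```

About one rotation every two minutes, slow enough to keep inner-ear effects mild, which is why these designs are kilometers across.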
Starting point is 00:53:16 No. All right. Listen, so disassemble asteroids and build large rotating cylinders called O'Neill colonies, where you live on the inside, you know, omega squared times r. All right. One more story before we get to our Q&A, which is a story from a friend, Matt Angle, the CEO of Paradromics. So Paradromics has been one of the, you know, significant number of BCI companies, brain-computer interface companies. And what's interesting about them: they've just
Starting point is 00:53:45 completed their testing in sheep. Neuralink did their testing in macaque monkeys. Paradromics has done their testing in sheep. And they've been approved to go into humans, which they will do in about two months' time, early January, February time frame. And I think what's most interesting is that they've been able to hit a speed about 10 times, or actually 20 times, faster than Neuralink. So Neuralink's been at about 10 bits per second. The Paradromics implant is at 200 bits per second. So Salim, you and I have always talked about Ray's prediction on high-bandwidth BCI
Starting point is 00:54:27 by the early 2030s. Is it going to happen? So we're seeing all these companies moving forward here. A few years ago, I was a hard no on that. And now I'm like, oh, shit, he's right again. Alex, what are your thoughts here, buddy? Yeah, I think we're seeing the BCI space become competitive, which is great. Yes, we should all get our Ray Was Right hats. Fine.
Starting point is 00:54:54 But I think if you extrapolate this, one of my more fun thought experiments is: when do we actually get our nanobots in the brain for high-throughput, more effective BCI? And it's interesting. You can look at the cost of producing a gigaflop of compute versus the typical size of a gigaflop of compute. So when Apple introduced the iMac, the original first iMac was about a gigaflop. The first iPhone, the first Apple Watch: there's something magical about rolling out a form factor with about a gigaflop. You extrapolate out that curve naively, assuming exponential progress for a gigaflop, saying a gigaflop is the threshold at which we have useful general-purpose computing, including for the purpose of maybe even substituting for human brain cells
Starting point is 00:55:45 in the context of high-throughput BCI and/or a very invasive uploading scenario. That curve, you get 2045, which is, again, hashtag Ray Was Right hat. That's when you get about a gigaflop the size of a human brain cell. So I do think we're very much on trajectory for Ray-was-right-style human mind uploading and invasive BCIs, and non-invasive. This is a quasi-invasive BCI. I think we're also going to get lots of wearables. By the way, Ray.
Starting point is 00:56:17 We have Ray coming on. Yeah, Ray's going to be joining the pod in early January to talk about his predictions for 2026. Also, we'll have Brett Adcock coming on
Starting point is 00:56:34 I mean, it's too soon. It's a waste of time. Too soon. We'll ask about all of it. We'll ask about all of it. We'll need a little bigger podcast. Yeah, well, we'll get one by then. Hey, by the way, we crossed 400,000 subscribers.
Starting point is 00:56:48 So thank you to all those who subscribed to push us over. Our next hill is 500,000. Then we're going for that million. Why? Because Jet and Dax want us to get a million subscribers. You know, whatever chemistry we have here, in terms of processing the news and making sense of it for others, it's seeming to really resonate. The number of calls
Starting point is 00:57:13 and accolades and kind of feedback I'm getting. Thank you. I'm not sure if you're seeing the same thing. Thank you to our listeners for the feedback. We do read all of your comments. And in fact, we process the comments and pull out the questions. We're about to jump into that segment with an AMA. But Emad, what are your thoughts on BCI?
Starting point is 00:57:34 I think that this year has been a breakthrough year. Next year, you'll see even bigger advances. We've both seen what else is going on behind the scenes. And I think it'll probably be one of the biggest investment areas in the next three years, actually. Because what could be better for solving the issues than augmenting humanity directly? I think, as Elon said, the only way you're going to be able to keep up with the AGIs is to plug in. Yeah.
Starting point is 00:58:06 And so it's going to be of geostrategic importance as well as financial importance. There's something fundamentally interesting about the brain, because we still really have little idea how it works. But as long as we can interface with it effectively, that's very, very powerful. Our memories are already outsourced to our smartphones; we don't really use our memory neurons in the same way we used to. And therefore, we'll start doing that with more and more brain functioning capacity, releasing that load and using it for other things. So I'm really excited about what comes with this. Maybe just to add quickly to Salim's point, this is admittedly a bit of a hot take, but arguably we solved AGI, we solved superintelligence
Starting point is 00:58:36 without actually having a good mechanistic understanding of natural intelligence. I think it's pretty likely we're going to solve brain-computer interfaces, and maybe even whole-brain emulation, without ever having a detailed mechanistic understanding of the human brain. You can get pretty far with phenomenology. Alex, do you think we can use AI to solve the hard problem of consciousness, the whole
Starting point is 00:58:59 qualia thing? Yes. Okay. We want to have a conversation about that, in terms of how that goes about. Let's take that offline. All right. Two quick points on this BCI. Number one, amazing people playing in this space, right?
Starting point is 00:59:15 Max Hodak, who is the co-founder of Neuralink, now has a company called Science. Go and check it out. They have a completely different approach to interfacing between the compute world and your neocortex. Brilliant: basically using neural stem cells to grow nerve endings into the brain that wire together and fire together. And then Sam Altman invested in something called Merge Labs. It's still kind of under wraps, but we'll be hearing a lot more about Merge in the next few months.
Starting point is 00:59:53 So I have one final question. Ray's prediction on high bandwidth BCI is really dependent on having nanotechnology. And the question is, where are we on that front? I'm still waiting to hear some good updates on the ability to assemble molecules atom by atom, not with wet nanotechnology, which is biology, but assemblers like Eric Drexler spoke about. Alex, any thoughts there? I spent so many years chasing nano assemblers. I do think we're going to get to Drexlerian style, although even Eric Drexler had sort of a personal evolution.
Starting point is 01:00:33 I spent so many years chasing nano-assemblers. I do think we're going to get to Drexlerian style, although even Eric Drexler had sort of a personal evolution. We've chatted a number of times with him about this, from sort of pure diamondoid-style, quote-unquote, molecular assemblers, to then there was the Nanosystems phase, where it's not about self-replicating nanorobots; it's more about desktop factories that produce things. Here's what I think. I think, at the very latest, and this is in my mind an ultra-conservative outer bound, by 2045 we get our Drexlerian nano-assemblers. I actually think we're far likelier to get them in some soft form: maybe it looks like DNA origami, maybe it looks like AI solving the Feynman Grand Challenge, which includes both computational and nano-robotic challenges.
Starting point is 01:01:14 I think we're likely to get some AI solution to early-style Drexlerian nanotech in the next 10 years. I don't think it's going to take that long, but at the same time... That's a 2035 date. Yeah, like everything gets solved in the next 10 years. I don't think you need to have that high bandwidth, to be honest. We did work at Stability on Mind's Eye, where we reconstructed images people saw from MRIs, which is incredibly low bandwidth. And if you look at the forward and backward diffusion processes, what you're likely to have is, before you get to the full bandwidth, you'll have partial bandwidth that can effectively reconstruct brain processes with very little information.
Starting point is 01:01:53 And then you'll just run diffusion models to do that, in a similar way to what Sunday Robotics and others have done going forward. There was a project, hey, I've got to throw this out, there was a project out of Japan called Dreamcatcher. And what they were doing was having you sleep in an MRI machine, storing the images coming off your optical nerve, and then replaying your dreams back to you the next day, which was hugely unnerving, you know. Very quickly on this one, Peter: with fMRI, you get approximately a million voxels per second just streaming off. You can do high-bandwidth decoding of thought with a million voxels per second. I just don't have my portable fMRI machine to carry around. Yeah, but those are getting smaller. You will have one. By the way, there is a team that I've been talking to that seems to have a credible path to molecular manufacturing. So, happy to connect you with them.
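Putting Alex's fMRI number next to the implant figures quoted earlier in the episode: the comparison below assumes 16-bit samples per voxel (an assumption) against the 200 bits per second cited for the Paradromics implant.

```python
# Raw data-rate comparison: an fMRI stream vs. a current implant.
voxels_per_second = 1_000_000    # fMRI figure quoted in the discussion
bits_per_voxel = 16              # assumed sample depth
fmri_bps = voxels_per_second * bits_per_voxel

implant_bps = 200                # Paradromics figure quoted earlier
ratio = fmri_bps / implant_bps

print(f"fMRI: ~{fmri_bps / 1e6:.0f} Mbit/s raw, roughly {ratio:,.0f}x a 200 bps implant")
```

Raw voxel streams are not decoded thought, of course, but the gap shows how far implant bandwidth has to climb.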
Starting point is 01:02:47 I can't wait. All right, let's get into some of our questions from our subscribers here. Let's jump in. The first one is from David Bowman 6224. David says, I'd like to hear AWG tackle Emad's thousand-day prediction. So what does AWG think of Emad's AGI-in-a-thousand-days prediction? So, Emad, do you want to state your prediction first? And then I'd love to hear Alex's commentary.
Starting point is 01:03:13 Yeah, I was just saying that I think that most, Most human economic work is negative value within a thousand days, well, 900 days now left at most. Not that it will replace all the jobs, but definitely it'll be there for just any job that can be done on the other side of a keyboard or mouse. So that's a weaker version of AGI than in some cases. Alex. Yeah, I think the central challenge, as always, is defining what we mean with AGI.
Starting point is 01:03:40 I think if AGI means generality, I think we've had AGI since, at the very latest, the summer of 2020, when GPT-3 and the Language Models are Few-Shot Learners paper came out. If AGI means economically, some sort of economic parallel with humanity, yeah, I agree that either it is the case, Schumpeter-style, that we already have some sort of economic generality, for example, as parameterized by OpenAI's GDPval benchmark. If you believe that benchmark, economically general AI is either already here or imminent, like the next few months. Or if you have some other preferred benchmark for human economic output, it's probably imminent, if not already here.
Starting point is 01:04:29 All right. Let's go to the next question from Josh. Insert my standard rant about AGI. Okay. So, so. Incorporated by reference. So acknowledged. Thank you.
Starting point is 01:04:38 So @JoshS5937 says, what is the future of land ownership in a future without scarcity? Land is finite. Will it remain the final scarce resource? So, Josh, it's a good question. The way I answer it is two different ways. Number one, we're going to be spending a lot of time in the virtual world,
Starting point is 01:04:59 and there you'll be able to gain access to unique virtual real estate. The second is you're thinking with a very Earth-centric point of view. There's the moon, there's Mars, there are massive O'Neill colonies built out of the asteroids, and we're going to start to see humanity migrate beyond the Earth. Having said that, yes, Central Park West apartments are still going to be scarce. Deeply disagree. I want to rant on this. Okay, go for it, Salim. So I did some fact-checking here.
Starting point is 01:05:31 It turns out there's about 16 billion acres of habitable land on Earth. That's about two acres per person. Okay? That's a pretty decent number. And that's habitable. Let's note that passenger drones are going to make difficult-to-reach areas very habitable. So that goes up to about 20 billion to 22 billion acres of habitable land. So technology will expand the amount of habitable and reachable land.
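The acres-per-person arithmetic behind Salim's point can be laid out in a few lines; the population figures below are round-number assumptions consistent with the numbers quoted in the conversation.

```python
# Rough acres-per-person arithmetic (round-number assumptions).

habitable_acres = 16e9    # ~16 billion habitable acres today
expanded_acres = 22e9     # ~20-22 billion once hard-to-reach land opens up
population_now = 8e9      # assumed ~8 billion people today
population_peak = 10e9    # projected ~10 billion peak around 2050

acres_per_person_now = habitable_acres / population_now
acres_per_person_peak = expanded_acres / population_peak

print(f"Today: ~{acres_per_person_now:.1f} acres per person")
print(f"At peak population: ~{acres_per_person_peak:.1f} acres per person")
```

Either way you run it, the answer stays right around two acres per person, which is the figure Salim cites.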
Starting point is 01:05:57 Still, we'll peak at about 10 billion population by 2050 before we start dropping off. That's still about two acres per person, which is a pretty decent number. And all of the technology is allowing us to reach that land more easily and make that land more usable. If you fly across India, the most populous country in the world, it's mostly empty. You see populations at the edges,
Starting point is 01:06:23 on the coast; the middle, there's kind of nothing there. Same with Africa, same with the U.S. You fly across the U.S., there's nobody there in the middle. And I'm Canadian. There's nobody in Canada. So there's a lot of land that we can use,
Starting point is 01:06:35 and technology makes it much more accessible, temperature, HVAC, heating, cooling. The only constraint is energy and compute, as Alex would say. Amazing. That's a great point. I'd like to just give a practical example to listeners. Waymo is now basically legal across San Francisco, right?
Starting point is 01:06:54 And so that could completely change where people live, because you can just get into your Waymo and it will just take you and your kids anywhere. The prediction from, sorry, just to add to that, the prediction for Tesla is it would be about 30 cents a mile to get somewhere in the robotaxi. That's like near zero compared to today, a 10x drop from where we are now. Alex, and maybe, maybe to just add a bit of nuance to this. I think in the short to medium term, land is becoming post-scarce. As you say, we can
Starting point is 01:07:26 build up, we can build down, we can build on other planets, the important use case that hasn't been touched on, we're going to have so many humans, I think, uploaded in one form or another into the cloud, the cloud doesn't have the same concept of land. So I think short to medium term, land is post scarce. In the long term, I think the scarcity of land depends on whether AI economies have a better use for land than we do. If we do find ourselves taking apart the solar system, land could actually become really scarce in the end. Yeah. By the way, let's just talk one second about uploading.
Starting point is 01:08:00 I mean, when do you actually believe we're going to start to see human uploads, to the point where you, Alex, say, okay, upload me, and there's this speaker that comes on, you know, this voice comes over and says, hey, Alex, I've been uploaded. You can off yourself now. We don't need your biological body anymore. I'm in the cloud. Thanks for the vote of confidence. I think we've already seen non-invasive uploading in the form of large language models.
Starting point is 01:08:28 Large language models are arguably sort of an upload of an ensemble of all of humanity. In terms of individual uploading that's non-invasive, I think we're either there already in some form. You know, Emad touched on earlier, or alluded to, constructing foundation models from fMRI scans. There are a number of groups that are training foundation models from fMRI scans. Arguably, those are low-fidelity but non-invasive facsimiles of human minds. I think we're going to get to...
Starting point is 01:08:58 Wait, wait, wait, wait, wait, wait, hold on a second. There are people training LLMs on fMRI scans? Correct. A number of groups now, including Meta, by the way, really well-financed, really talented groups. Holy crap, okay. So the real idea about uploading? The implication is, so LLMs are trained to reproduce the behavior of humans, like fat biological meat fingers tapping keys on a keyboard, uploading text to the internet. But with foundation models trained off of fMRI data, like a million voxels per second order of magnitude,
Starting point is 01:09:36 you can imagine pre-training a foundation model that basically encapsulates human thought, certainly for human thought decoding purposes. You get that. And fMRIs can track single neurons firing in real time? No. fMRIs are both spatially low resolution and temporally low resolution. You get like one to two second temporal resolution and approximately one cubic millimeter spatial resolution at best. But nonetheless, it turns out to be enough for thought decoding. So, Alex, the concept around a true upload is, can I actually map your connectome? Can I map, for a human, not only the roughly 100 billion neurons, but the 100 trillion synaptic connections, typically done by slicing the brain into ever-thinner slices
Starting point is 01:10:24 and using AI to map those interconnections there. It's a destructive process. Do you think we're going... Really invasive right now. Yes, very invasive. Here's my brain. Slice it into a thousand pieces. Yeah, more like a billion pieces. So I think we're going to have reference non-human organisms. So Drosophila, major progress already.
Starting point is 01:10:47 Was done. Fruit flies. Yeah, was done. Mice about to be done. Like, there have been a few one-to-three cubic millimeters of mouse brain uploaded or scanned. And in some form or another, to the extent that the connectome is a proxy for uploading, done. I think mice overall are going to be done shortly.
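To give a sense of the scale jump from a few cubic millimeters of mouse brain to a full human connectome, here is a back-of-envelope sketch using the neuron and synapse counts from the discussion; the bytes-per-synapse figure is an illustrative assumption for a minimal graph representation (real connectomics pipelines store vastly more raw imaging data than this).

```python
# Back-of-envelope size of a minimal human connectome graph.

neurons = 100e9        # ~100 billion neurons (figure from the discussion)
synapses = 100e12      # ~100 trillion synaptic connections
bytes_per_synapse = 8  # assumed: packed source/target IDs plus a weight

graph_bytes = synapses * bytes_per_synapse
synapses_per_neuron = synapses / neurons

print(f"~{synapses_per_neuron:,.0f} synapses per neuron on average")
print(f"Minimal connectome graph: ~{graph_bytes / 1e15:.1f} petabytes")
```

Even this stripped-down representation lands around a petabyte, which is why the raw slicing-and-imaging datasets behind it run orders of magnitude larger still.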
Starting point is 01:11:07 Lobsters next, right? Lobsters are easier, interestingly. Mice are harder. I think we're going to see the full human high-res connectome probably in the next five years. Alex, aren't you an advisor to Nectome? I'm not a formal advisor to Nectome, but I am an advisor to a company, Eon Systems,
Starting point is 01:11:26 that is working on solving human whole-brain emulation and uploading. All right, I'm going to move us forward here on to our next AMA question. And then I want to close out with what are you most thankful for in 2025? So start thinking about that in the background. So @SuccessCoachCody writes, how do we prevent a world where millions fall into poverty before AI-driven abundance arrives? What are the real solutions for people who may lose their jobs long before the long-term benefits of AI kick in?
Starting point is 01:11:58 And @JNKind5 asked a similar question: you often say AI will lift up people at the bottom. How exactly will that happen for those who can't meet basic needs like food and health care today? You know, we hit on this about two pods ago, where there were concerns, we saw this in the data from the FII event, concerns about poverty, about losing jobs, about being able to support your cost of living. Salim, you want to jump in on this one? Wow, there are like 20 questions buried in each of these. I think we are in a difficult kind of 10-year period
Starting point is 01:12:43 when we transition all of our world systems from scarcity to abundance, right? Consider the fact that almost every business in the world has focused on scarcity for the last 10,000 years; if you didn't have scarcity, you kind of didn't have a business. We're moving now to abundance models, and actually exponential organizations find business models around abundance, which is the starting point of that transition. But for society at large, we need to move to some model, whether it's UBI or UBS, UBS being universal basic services, right, a similar type of concept, where you just give basic capability and make that available to everybody.
Starting point is 01:13:23 Solve the bottom two layers of Maslow's hierarchy. And the trick for UBI, by the way, for people that are naysayers, is if you can find the balance where people can survive but not be happy, you still have a very thriving economy. Entrepreneurship explodes in that model, et cetera, so we're not far away. The problem is governments, and getting governments to move from a union, labor, job, taxation model to that is such a big leap. We don't have confidence in governments doing that. And the problem with governments we have all over the world is they want to be needed. They ran a two-year UBI pilot in Manitoba in the 70s,
Starting point is 01:13:58 and it was so successful that at some point the government realized, we're not even needed here, and they canceled the program so that it could stay needed. That's the immune-system problem in government that has to be solved. Just a quick thing for all the folks that emailed me saying, hey, how do I find out about that? I'm putting some stuff together. We'll send it out shortly to everybody.
Starting point is 01:14:17 So I tend to be on the optimistic side. Technology uplifts people at the bottom. People are leveraging technology to make more and more money in the short term. We've got lots of data around that. And as we get technology democratized and demonetized for a broader population, then everybody lifts up. And Peter, you and I talk, all of us talk all the time: forget the richest people. If you can lift the bottom, that's the key.
Starting point is 01:14:41 And the bottom is being lifted very, very appreciably. You just don't see that. Yes, people compare themselves against, you know, the Kardashians or whomever else. Emad, you've been doing incredible work here with Intelligent Internet on this specific problem. Could you sort of lay that out and give us your thoughts here? Yeah, I think as with many things in human life, this is a coordination problem, right?
Starting point is 01:15:07 Again, we have enough resources, two acres per person, you know, food, healthcare, et cetera, to coordinate everyone. But we've always lacked the capability to do so because our systems are dumb. So we have projects like SAGE, which we launched at FII, to do top-down policy. And really, the way that I've been thinking about it more is like AI social scientists. You know, we talk a lot about AI scientists for biology, for chemistry, for quantum. AI social scientists to figure out economics, politics, and implementation are going to be so huge. That's basically our SAGE project. On the other side, I think you need to have
Starting point is 01:15:40 universal AI given to everyone, a Jarvis that's looking out for people to help them navigate on an individual basis, because that's how they get access to food, healthcare, et cetera. The reason they don't is because people are invisible, particularly the poorest of people. But the pace at which this is going to come over the next few years is going to be so intense that governments need to take a big step forward and say, A, we need to use AI to coordinate this, B, we need to get AI to the people, and C, we need to look at historical counterparts. And I think you probably need to look at the 1933 New Deal that came out of the Great Depression and others, because you might see entire industries disappear within a matter of days or months.
Starting point is 01:16:23 Like, Grok 4.1 Fast just scored like 95% on Tau-bench, the customer service benchmark, and it's 50 cents per million words, better than any human. That would mean just no customer service jobs within two years, you know? Again, it takes a little while, but it's one-way. So coordinate with AI, give everyone universal AI, and then layer services and coordination on top of that. You know, you're going to appear as the headline in some news article now. Emad, if you remember, you said no more coding in a year, and it was like headlines across India. I think the really killer point that Emad makes right there is this is one-way.
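To put Emad's 50-cents-per-million-words figure in perspective, here is a rough cost comparison; the human wage and words-per-hour throughput are illustrative assumptions, not numbers from the episode.

```python
# Rough cost comparison: AI vs. human customer service (illustrative).

model_cost_per_million_words = 0.50  # figure quoted in the discussion

# Assumed: a human agent at $15/hour producing ~2,000 words/hour of replies.
human_wage_per_hour = 15.0
human_words_per_hour = 2000
human_cost_per_million_words = human_wage_per_hour / human_words_per_hour * 1e6

ratio = human_cost_per_million_words / model_cost_per_million_words

print(f"Human: ~${human_cost_per_million_words:,.0f} per million words")
print(f"Model: ${model_cost_per_million_words:.2f} per million words")
print(f"~{ratio:,.0f}x cost difference")
```

Under these assumptions the gap is four orders of magnitude, which is the economic pressure behind the "no customer service jobs within two years" claim.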
Starting point is 01:17:01 We're not going back. Yes. We have to face the future that's coming, and let's get real about it. Let's get data-driven and evidentiary around it. And just freaking make it happen, because left to itself, we've got these two futures: a Mad Max future or a Star Trek future, right? And you can see our politicians pulling us straight to Mad Max. We have the opportunity with technology to pull us in the other direction. This is what this community is about. This is what we have to do. Alex, closing thought on this one.
Starting point is 01:17:31 Yeah, closing thought is, I think the central policy challenge is growing the overall economy much faster than the value of conventional human labor is destroyed or obsoleted by AI. So I'm primarily focused on ensuring that we can achieve radical macroeconomic growth. If we can do that, then making sure that UBI, UBS, or UBE, universal basic equity, or some other variant thereof, some door number four, I think those all become more a matter of policy decisions, but it's relatively easy to distribute abundance if we have abundance. Yeah. All right, we're going to close out on a question aimed at you, Alex. This is probably— I'm going to coin a phrase here. This just occurred to me, UBA, universal basic abundance.
Starting point is 01:18:15 There you go, Peter. Okay, I love it. UBA. And then that gives you an abundance of very interesting things underneath it for all the others. Love that. All right, the final AMA. And please, if you're listening today and you say, I've got a question,
Starting point is 01:18:31 put it in the chat on this particular episode of Moonshots and we'll look for it. And if it's intriguing enough, I'd love to ask it to the Moonshot Mates. So @XFinix96 asks, hey, Alex, the moon and Jupiter should be off limits to mining. Don't they stabilize our environment? What are you trying to do, Alex? Start a riot? Gosh, we're having an "All these worlds are yours except Europa, attempt no landings there" moment.
Starting point is 01:19:03 I think if you've read 2010 by Arthur C. Clarke. No, I'm, so many thoughts. First is, no, we don't need to stop mining the moon and Jupiter to stabilize our environment. Jupiter does at the moment play an important role in protecting the inner solar system from Oort cloud bodies and other objects from the outer solar system. The moon does play, for the moment, an important role in the tides and other sort of atmospheric effects. And romantic love. For the moment. For the moment is doing the heavy lifting in that sentence.
Starting point is 01:19:38 So once we have the ability, which I think it seems likely we increasingly will, to disassemble the moon and disassemble Jupiter, and assuming the solar system does go down that route, we will also have the ability to protect the inner solar system from a variety of asteroidal bodies and to recreate the tides artificially. My favorite quote from you, Alex, is, Saturn has had it coming for a long time. That's got to be an all-timer, Alex, but it's true. Oh, goodness. All right. Well, asteroids represent a significant amount of mass, and I think they can handle our needs for at least a decade or two.
Starting point is 01:20:14 So, all right. With that, I want to close out with a question here. What are you guys grateful for having happened in 2025, as a closing gratitude? I'll kick it off. I'm super excited that humanoid robots have made so much progress. And they're real, the capital is being invested, the manufacturing plants are being built, and my own version of Data or C-3PO is on its way. Alex, how about you?
Starting point is 01:20:45 So many things, but I'll pick one. I'm grateful that math is credibly and defensibly being solved by AI. That is, in my mind, such a canary that this is going to work. The singularity is in progress. We're going to solve all of the grand challenges of math, science, engineering, and medicine over the next few years. And math is just the tip of the iceberg. It's very exciting. Amazing. Salim. Again, a million things. I think three things pop to mind. One is I'm unbelievably grateful for this podcast. Peter, thank you for pulling us together. Oh, I am too.
Starting point is 01:21:22 Missing Dave a lot at this moment. And thank you. Just let me say it real quick to Nick Singh, to Dana, to Gianluca, who help this really be excellent. So thank you, guys, for that. I think this radically optimistic, realistic view of the future is the most important kind of tonic for what's happening out in the world today. And there's kind of palpable relief from all the listeners going, well, thank God, there's something I look forward to every week. That's number one.
Starting point is 01:21:55 Number two, I think I'm kind of starting to just wallow in gratitude on a near-permanent basis, just thinking about the incredible future that is appearing in front of us, driven by that inner loop that Alex talks about. I'm still a fan of the moon for the moment, so let's, we don't need to go there for a house. I think the third would be my ExO ecosystem is kind of finally gelling in a really powerful way. It's been like 10 years building this ecosystem. If I ever say in the future I want to build an ecosystem, please, somebody get a baseball bat and take me behind a woodshed. It's unbelievably difficult, but it's actually now coming together in a very, very powerful way.
Starting point is 01:22:38 There's a whole bunch of announcements that we have. And finally, I'll do a plug. We're doing this meaning-of-life session where I will claim to answer why we are alive and how we live effectively. And so a link will be in the show notes for everybody. Tickets are selling fast for that. Nice. Emad, where do you come out on your gratitudes? Yeah, it's a nice small one, Salim.
Starting point is 01:23:01 Small question you're answering. I think that there's two big things. I always go for niche projects. Yeah, I think there's two big things. One is, I think we've had the technological breakthroughs and infrastructure breakthroughs to be able to build the AI social scientists to improve our infrastructure and finally coordinate as a species. And that is a huge thing that we'll start seeing rolling out and announced next year as well.
Starting point is 01:23:23 And number two, I think, minus the hardware, we have all of the tools we need now for the holodeck. Awesome. We just got to put that together. I'm going to add one final gratitude to close us out here, which is the incredible progress being made on reaching longevity escape velocity, right? The focus by all the hyperscalers and model builders on how do we understand
Starting point is 01:23:48 how to add decades of health into our lives, as Dario says, how do we double the human lifespan in the next five to ten years. That gets me jazzed. You know why? Because I'm excited to see the Star Trek future coming our way. We're going to close out. If you're listening to this rather than watching it, go to YouTube to watch this incredible outro music and video by John Novotny. John again. John again, you're going to see all of your favorite Moonshot Mates as Star Trek characters. Here, of course, is the opening scene with AWG as a Vulcan. As a blonde Vulcan with a ponytail.
Starting point is 01:24:29 A blonde Vulcan with a ponytail. All right, let's check it out. And Salim, once again, you look hot here, buddy. You look hot. All right. Enjoy. Beneath the silver moon, we hear the wild winds call. The map is torn to pieces, but we're setting out regardless of it all.
Starting point is 01:24:52 With our courage tight, we're giving everything we've got. As long as I get a phaser. For every path turns epic when you're chasing moonshots. The forest's full of secrets and the mountains sharp and tall. We tread the hidden canyons and we never, and we never fear the fall. The storm may rise to test us, but we'll meet it on the spot.
Starting point is 01:25:34 And home always finds the ones who take moonshots. Ah, that was epic. I just don't like wearing a red shirt on some of those planets. Yes. I don't know. If I can be likened to Picard in any way, I'm good. And Alex, of course, you're the science officer on all the missions here. Obviously. Everybody, I wish you an incredible Thanksgiving holiday, to all our listeners, to my Moonshot Mates.
Starting point is 01:26:07 Dave, we missed you on this episode. Looking forward to seeing you. We're recording again early next week. A lot going on. And we're going to be spending some time with Mustafa Suleyman as well, the CEO of Microsoft AI; we're doing a podcast with him. A lot of incredible things. Get ready. 2026 is going to rock the planet. Hopefully not physically, but definitely emotionally and intellectually. Let's all wallow in gratitude the next few days. Yeah, beautiful. And stuffing and turkey. All right. Take care, everybody. Take care, folks. Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead.
Starting point is 01:26:50 I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff. Only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free,
Starting point is 01:27:31 go to diamandis.com slash metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode. At Desjardins, we speak business. We speak startup funding and comprehensive game plans. We've mastered made-to-measure growth and expansion advice, and we can talk your ear off about transferring your business when the time comes. Because at Desjardins Business, we speak the same language you do. Business.
Starting point is 01:28:10 So join the more than 400,000 Canadian entrepreneurs who already count on us, and contact Desjardins today. We'd love to talk. Business.
