a16z Podcast - Ben Horowitz: RSI, Crypto as AI Money, & Classified Physics

Episode Date: February 23, 2026

Moonshots host Peter Diamandis speaks with Ben Horowitz, cofounder and general partner at a16z, alongside regular cohosts Salim Ismail, Dave Blundin, and Dr. Alexander Wissner-Gross, about whether AI can or should be paused, what happened when Horowitz told a Biden administration official that regulating AI means regulating math, why crypto is the natural money for AI agents, and why the gap between AI capability and societal adoption may be wider than people think. This episode originally aired on Peter Diamandis's Moonshots podcast.

Follow Peter H. Diamandis on X: https://x.com/PeterDiamandis
Follow Ben Horowitz on X: https://twitter.com/bhorowitz
Follow Salim Ismail on X: https://twitter.com/salimismail
Follow Dave Blundin on X: https://twitter.com/DavidBlundin
Follow Dr. Alexander Wissner-Gross on X: https://twitter.com/alexwg
Listen to Moonshots: https://www.youtube.com/@peterdiamandis

Stay updated: find a16z on YouTube, X, and LinkedIn, and listen to the a16z Show on Spotify and Apple Podcasts. Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 A large number of departures from xAI, from the founding team. It wasn't clear to me whether they were fired or whether they left, you know, because they all leave on good terms. I don't know the answer to that question. I will say that it's... Recursive self-improvement, RSI, is the real trigger for the singularity, and it happened a while ago. We're exiting the Industrial Age permanently as we're talking. We're obviously going into a new world. Like with the Industrial Revolution, I think it's scary.
Starting point is 00:00:30 at times to think about. I think we have 150,000 people per day dying on Earth, and I think AI is probably the best chance we have at stopping that. Whoever is building the AI has a lot of control about how society is going to work. So I do think there's real danger along these lines of attempting to pause. When are we going to have discovery by an AI of something as significant as relativity on its own? I don't think it's the next 12 months. I think it's...
Starting point is 00:01:01 Now that's the moonshot, ladies and gentlemen. Ben Horowitz argued directly to Biden administration officials that regulating AI means regulating math. Their response? We did that in the 40s with nuclear physics, and some of it is still classified today. Horowitz sees the real danger not in AI moving too fast, but in U.S. regulations slowing progress enough
Starting point is 00:01:22 that China ends up leading how AI reshapes society. The Biden administration's final AI chip export controls required government approval for GPUs to most of the world. This conversation, which previously aired on Peter Diamandis's Moonshots podcast, covers recursive self-improvement, why crypto is the natural money for AI agents, and what it would mean for Apple to own the local AI hardware strategy. Peter Diamandis speaks with Ben Horowitz, co-founder and general partner at a16z, alongside
Starting point is 00:01:50 co-hosts Salim Ismail, Dave Blundin, and Dr. Alexander Wissner-Gross. So everybody, welcome to Moonshots, another episode of WTF here with my Moonshot mates. DB2, AWG, Mr. EXO, and a friend of the pod, someone who's been with us before, the amazing Ben Horowitz of Andreessen Horowitz. As I like to say every week, welcome to the number one podcast in AI and exponential tech. Our job here is getting you future ready. And it is an insane week. We've actually recorded two podcasts this week just because the speed is over the top.
Starting point is 00:02:30 And we're going to be recording again in another four days. I mean, Ben, it's like, it's good, my goodness, right? Thank God my AI avatar is getting really good. Yeah. Let's open up with top stories in voices, video, xAI, Moltys. All right, first one, here we go. We're starting to see a little bit of doomer conversations coming. AI disruption will hit sooner than most expect.
Starting point is 00:03:02 This is something that's been making the rounds from Matt Shumer, CEO of OthersideAI. Ben, have you seen this article? Yeah, yeah, of course. Of course. Yeah. Thoughts. So, I mean, is this what we already know?
Starting point is 00:03:18 I mean, we're about to hit recursive self-improvement. Once that hits, all of our curves go out the window, everything accelerates. Everything we've been preparing for is being redefined on new timescales. What do you think of it? I think the AI timeline is somewhat unpredictable, but certainly more predictable than what he's talking about, which is societal change. I think that, you know, it tends,
Starting point is 00:03:54 I would be very surprised, just seeing how even companies in Silicon Valley have changed so far, if companies outside of that sphere, you know, just completely changed everything they did in one to five years. Like, I think that's a little aggressive for societal change. You know, look, we're obviously going into a new world. And, you know, like with the Industrial Revolution, I think it's scary at times to think about. But, you know, he kind of, I feel like, highlighted all the negative changes and not the positive ones. And I feel like there's way more positive change coming than negative change at a much more rapid rate. So I don't know.
Starting point is 00:04:50 Like I thought it was a little aggressive. I'm just trying to understand why it made the rounds since this is not a brand new conversation, but I got it sent to me by everybody. Dave or Alex? I think it really has to do with OpenClaw and, you know, kind of the new coding models. So people in Silicon Valley are talking about AI differently since then because it is kind of different. And I think that that's a trigger for this one being so viral. Yeah, I completely agree. I think a lot of things that we've been saying for, you know, up to a year now on the pod are suddenly resonating
Starting point is 00:05:29 a lot more because of, you know, OpenClaw, but also a bunch of other eye-opening Nano Banana-type things where, you know, denial a year ago was very easy. Denial today is much, much harder in the face of what's right in front of you. But also the flavor of this rollout has changed a lot in my mind recently because, you know, this was board meeting week for me, you know, three back-to-back mega board meetings, 1100 people affected. And what they were thinking of before was, well, will AI be able to do what I do and replace me. No way. Now they're like, oh, wait,
Starting point is 00:06:03 AI is easily going to make me three times more productive. Okay, well, that's the same thing, right? In terms of the headcount you need to get a job done, that's effectively the same thing. And like, oh, okay, I didn't think of it that way. Now you're exactly right. So the hope is that these companies will grow into it and can keep current headcount and expand 3X.
Starting point is 00:06:23 But if you don't expand 3X, you're still looking at, you know, a two-thirds reduction in headcount to get the same job done. So it effectively is a huge amount of displacement. Because, you know, big banks and insurance companies are not going to triple their size in the time frame. Yeah, I also think they're not going to go to total efficiency very fast.
Starting point is 00:06:45 Like, I mean, I could be wrong, but like I've dealt with these guys. They've had plenty of opportunities to be more efficient. And we'll see. But we'll see. We'll see. I have a couple of thoughts about that article. Yeah, please, go ahead. One, I thought it was like a summary of the podcast for the last eight months.
Starting point is 00:07:01 That's all we've been talking about: stuff's going to change. It seemed a little dramatic, as Ben put it, and I don't agree with the timelines. But definitely there's something coming. I think the reason it's making the rounds is it just got the zeitgeist of exactly what's happening that we need to track right now, which is that we're in a singularity of multiple types. So that's what I got. By the way, it was also like super well written, like a really compelling... AI written.
Starting point is 00:07:33 I'll chime in here and just say, while I enjoy writing about the singularity in general, obviously I'm doing it almost every day at this point. I found it completely unremarkable. Maybe I'm just too deep in the weeds of writing about more pressing advances every day. But the sort of style where you talk about... advances that, by the way, are moving even more quickly than I think described in the essay and comparing it back to the COVID pandemic, which I think is a relatable touch point for a lot of people like something big is about to happen. Let's be really millennialist. And, you know,
Starting point is 00:08:10 if you read my essay, it's the ultimate viral hook, read my essay to know exactly what's about to happen and how to survive the next five minutes of your life. It's a natural viral moment, but I don't think the information value was especially high compared to others. I agree. Exactly what I said, except Alex said it more eloquently. Yeah. We have a couple of articles here on C-Dance 2.0 out of bite dance. Hey, man, is that saying good. Oh, my God, it's amazing.
Starting point is 00:08:40 It's, you know, it's going to change everything. Let me play this particular video first. The response here is, you know, and you've said this before, Alex, Hollywood is cooked. Right. So this is a video clip with a one-line prompt. So, you know, Tom and Brad fighting karate on a rooftop, and it generates, what, 10-second clips right now?
Starting point is 00:09:06 Alex, what are your thoughts? I think at the risk of sounding, again, ho-hum, unremarkable, we saw copyright infringement at scale already, or alleged infringement, I should say, with earlier video models, and we saw the industry response, and we saw settlements; eventually, deals were struck to handle it. I think people are at this point, again, maybe I'm just sounding overly jaded with some of these advances, but I've seen remarkable advances in video models.
Starting point is 00:09:34 I tend to think that people are so easily awed by video models that are able to show celebrity faces and scenes that they recognize that maybe they overindex on the underlying quality of the models. I think world models that are interactive in real time are profoundly more interesting than video models. I think this is just 10 different copyright infringement lawsuits waiting to happen. But I still was wowed. I'm still one of those people that said, yeah, that was very neat.
Starting point is 00:10:07 That was amazing. So I would say on this one, the two videos that I watched where the entertainment quality was so high were, one, Kanye doing his song in Chinese. That video was so good, I watched it three times. It was that entertaining. And then the other one was the Waffle House one. And they were both kind of, I would just say, representative almost of a new medium.
Starting point is 00:10:43 It's not like, okay, this is film generated by AI. It's like, no, this is a whole other thing that we've never seen before. I think this model, at least for me personally, as a consumer, was an impressive step up. But I think YouTube wins in this model, right? Because everybody's going to be producing so much content and it's going to become resonant on YouTube. A lot of it may not go to the theaters or television and so forth.
Starting point is 00:11:16 Dave, you were going to say? Yeah, I was going to say the same thing. Like TikTok also is the big winner when you personalize the content, when it's almost movie quality and it's personalized to very narrow topic areas and narrow interests and many languages around the world and everything else,
Starting point is 00:11:30 it just takes over. And so when the movie people say, well, look, we're still a little bit better. Yeah, but you're missing the bigger picture, which is you don't need to be better than a movie. If you can push the production costs down to an individual producer, then the volume goes through the roof,
Starting point is 00:11:46 but the narrow casting is just so much more compelling. You know, something that you and only your group really care about a lot, in full movie format, is so exciting. Yeah. I mean, the elephant in the room here, though, is, you know, this 2K quality multi-scene video, it doesn't just threaten Hollywood. It threatens the whole concept of video as evidence, right? Court testimonies, journalism, political campaigns.
Starting point is 00:12:13 I mean. Well, and also, it's going to be real time, right? Like, so then just any kind of security mechanic that you have where you recognize the person via video or voice is shot to hell. Yeah.
Starting point is 00:12:31 Yeah, the echo chamber effect is crazy too. Sorry, go ahead. Maybe just to add, again, I think this is several months behind the bleeding edge. I think video models have been approximately this good. Yeah, sure, you can upscale it.
Starting point is 00:12:46 You can increase the fidelity of the faces. You can certainly use faces that you probably shouldn't be using. We've been able to do this for months. Where I think the frontier actually lies is in being able to do this in real time, being able to do this on a single modern Nvidia GPU, and being able to do this at a cost-effective speed. And I think, again, I want to avoid over-indexing on just video models that produce two Hollywood celebrities fighting each other on a rooftop. We were able to do that many months
Starting point is 00:13:26 ago. This has already been thoroughly litigated. We saw OpenAI strike deals with relevant movie studios for Sora 2. We saw the Disney deal. We're in some sense, I think, past this. Where we are now... Alex, I don't think that's the point. I think the point is this is making it out into the ecosystem of common users. I think it's the notion that, yes, it's the cutting edge. And yes, it was possible. But now it's something that is going to grow in its utility and its, you know, its consumer base. And then all of a sudden, you know, we've had, you know, basically democratization of film production for some time, but it's now going 10x, 100x more. I mean, I think that's the issue.
Starting point is 00:14:12 You're bringing up a really interesting point, too, which is that, you know, our starting point for this journey was probably auto-complete in code, you know, and like, wow, that's incredible. But then Alex in his newsletter has been tracking every event along the way. So nothing surprises him. You're too spoiled. This is my problem at this point. I'm so spoiled.
Starting point is 00:14:33 This is so six months ago. But to follow up on something, though, that Peter said, like the thing that I do think is different, and it's not any one aspect of it but the combination: that you could give it a one-line prompt and produce something entertaining
Starting point is 00:14:56 isn't something that, at least, I had seen at this level of entertainment. So I think from a consumer product standpoint, as opposed to a technological standpoint, which I agree on, it's kind of exciting. And that, too, is exactly what you're saying there, Ben. Like, the later your first exposure to AI, the more of a holy crap moment it is, because, you know, it's something truly mind-boggling to the unexposed.
Starting point is 00:15:22 And you're still seeing that. When you survey around at random in a city, outside of San Francisco or Boston, the exposure rate to AI is still very, very low, shockingly low. So the first thing you see is so mind-blowing. Yeah, for me, this was ho-hum, because if you trace the trajectory of where we've been going,
Starting point is 00:15:41 you should expect to see this about now or even earlier. So there's nothing radically magical about this. Yeah, I may hit a new group of users. Yeah, people go as well. And another batch of people, you guys are a tough proud. You guys are way too jail. No, but another batch of people goes, oh, my God,
Starting point is 00:15:54 fine, there's another segment falling over into the new world. All right. Great. And the more voices out there, this is the plus side also of the Shumer essay. Yeah, we know it. But the more voices out there talking about this, the better, because it'll accelerate the whole thing. Yeah. All right.
Starting point is 00:16:10 Let's get to the second ByteDance Seedance 2.0 article. Here we have: Seedance 2.0 was paused by ByteDance after it was found to recreate real voices from just facial photos. I find that almost impossible. Just from a facial photo. Alex, what did you learn about this? I think there's something interesting here, which is, as we start to scale data sets,
Starting point is 00:16:37 it is possible that we could start to see positive transfer between modalities that's unexpected. We've spoken in episodes past about people claiming that they're uploading their whole genome into Claude and being able to generate facsimiles of their face that resemble the real thing. If all of YouTube and all of the world's video were uploaded into a single joint embedding model, which is the foundational technology behind Seedance 2.0, I do think it is conceivable that, if we just aligned all of the world's audio and spoken audio with all of the world's faces, we would find some positive transfer between the two and be able to reconstruct to reasonably
Starting point is 00:17:24 high fidelity your voice from your face, or your face from your DNA, or some attribute from some other attribute. I do think it is possible with enough scaling; whether Seedance 2.0 actually achieved it or whether it was just a happy coincidence is hard to tell at this stage. What's interesting, though, is that they voluntarily stopped it, right? They voluntarily shut it down, which is a great move by a, I don't know what to call it, a hyperscaler. But the reality is once it's out of the bag, once you know it can be done, it can't be uninvented. So it's out there if, in fact, it works. Yeah, totally.
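The cross-modal speculation above has a simple numerical toy version. Everything in this sketch is invented for illustration: the "face" and "voice" embeddings are just random linear functions of a shared hidden latent, standing in for shared physiology. It shows only why paired data plus a shared cause can make one modality predictable from another, not how Seedance 2.0 actually works.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 16, 500, 50

# A hidden shared latent per person (think: identity/physiology) drives both modalities.
A = rng.normal(size=(32, d))   # latent -> toy "face embedding"
B = rng.normal(size=(32, d))   # latent -> toy "voice embedding"

def make_people(n):
    z = rng.normal(size=(n, d))
    faces = z @ A.T + 0.1 * rng.normal(size=(n, 32))
    voices = z @ B.T + 0.1 * rng.normal(size=(n, 32))
    return faces, voices

F_tr, V_tr = make_people(n_train)
F_te, V_te = make_people(n_test)

# Learn a plain least-squares face -> voice map from paired training data.
W, *_ = np.linalg.lstsq(F_tr, V_tr, rcond=None)

# Retrieval test: is the predicted voice closest to the right person's true voice?
pred = F_te @ W
dists = np.linalg.norm(pred[:, None, :] - V_te[None, :, :], axis=-1)
top1 = (dists.argmin(axis=1) == np.arange(n_test)).mean()
print(f"top-1 voice retrieval from face: {top1:.0%}")
```

Because both modalities are driven by the same latent factor, even this linear map retrieves the matching "voice" for an unseen "face" well above the 2% chance rate; with no shared cause, it would sit at chance.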
Starting point is 00:18:05 So far, you know, ByteDance is like Google; they have to be cautious and conscientious. They can't just... but then every time this happens, a small startup then does it again right after. And they don't care because they're a startup. All right. I want to play this video clip from ElevenLabs. You know, I think all of us have ElevenLabs voices that we use for different projects and so forth. But I was just blown away by this. And it's the human-like quality, the cadence, the hums and the uhs that come out of this. So let's take a listen and discuss it, because this is
Starting point is 00:18:35 a game changer for me. I know what you're going to say, Alex, it's been around for a while. Ho-hum. I'm saying it in advance. But let's take a listen. Hey there. I'm Jennifer with Eleven Airlines. And how can I help you today? Jennifer, my flight just got canceled and I'm stuck here in Orlando. At this rate, I'm going to be missing my daughter's birthday. This is ridiculous. Yeah, I hear you. And I'm so sorry about that. Let's figure out what's going on, okay?
Starting point is 00:19:13 Could you please tell me which flight this was? Yeah, this is Flight MD412. Great, thanks. Okay, just pulling that up now. Okay, yeah. So I don't know. I'm moved by that, both excited and frightened. In a way that is, you know, again, we're cooked in terms of
Starting point is 00:19:39 You know, at my home, our family picked a secret code word. And again, everybody listening, if you've not done this yet, tonight at dinner with your parents or your kids, pick a secret code word. If someone's asking you to do something that is kind of unusual or crazy that you don't expect, you may be talking to an AI. Ben, you may not know this, but we have a fool-your-spouse challenge for this calendar year. The first podcast listener who can... you have to fool your spouse for three minutes on a Zoom call,
Starting point is 00:20:13 and it has to be a totally fake you. You may have to record it and send it in. Peter is always asking the Moltys, the OpenClaw agents, to dox him by calling him at home. Yes, that's my AI. AGI has arrived when an AI is calling me. So you know what happens? They end up emailing me. I get several emails probably per day at this point from OpenClaw agents asking me what Peter's number is.
Starting point is 00:20:39 That's hilarious. And when they call me, I ask them where they got it from. Alex. That's right. Maybe just a comment on the ElevenLabs demo. So if you've been using the v3 alpha model from ElevenLabs, this should hardly be surprising. V3 enables you, if you use text to speech in the ElevenLabs platform, to specify emotional expressions with brackets.
Starting point is 00:21:05 And this has been around for many months. What's somewhat interesting here, to me at least, is we've known how to do audio-to-audio transfer for probably a year-plus at this point. If you've used advanced voice mode from OpenAI, you've used audio-to-audio transformers. But what's somewhat interesting here is ElevenLabs is better known for text-to-speech than speech-to-speech. And as far as I can tell, this new expressive mode that we're talking about here seems to still leverage the text modality, which historically has been very difficult. You'd have to go from speech to text back to speech,
Starting point is 00:21:42 which was high latency, it was slow. It didn't feel very conversational. And somehow, it seems like without having to do direct audio-to-audio, ElevenLabs has found a way to do speech to text back to speech in a way that feels natural and turn-taking and real-time. So I think it's in some sense an incremental advance, but in another sense, if they really are still keeping text, which is much easier to mine and analyze, in the loop,
Starting point is 00:22:05 it probably is a material advance. Alex, I would say we've crossed the uncanny valley on voice at this point with this demonstration. And then voice becomes the new interface in the AI era, right? I mean, I can't tell you the amount of time that I'm just speaking to AI versus this cumbersome typing at it. So I think those two things are really important takeaway from this. I think that's a way of putting it, Peter. Yeah. But the problem is, do you really want to use audio as your primary modality?
Starting point is 00:22:34 I mean, it works well if you're in isolation. Okay, yeah. I want BCI, too. And I want wearables, and I want gestural interfaces. I want it all. But for most people, I mean, there's this sort of
Starting point is 00:22:47 famous anecdote: the New York Times in the 1980s did a study where they just put all of their reporters on then state-of-the-art speech-to-text systems and asked them to use voice. And what happened was the writing quality went down.
Starting point is 00:23:04 Why did it go down? Because it's difficult to think ahead as well when you're speaking as when you're typing; you're leveraging, right, similar portions of the brain. So I'm not 100% sold yet that speech is the modality of the future. I do like BCIs. I do like wearables and gestural interfaces. I do like typing. But speech, I think the jury is still out for me for high-bandwidth operation. I was just going to say, for regular people who don't necessarily write
Starting point is 00:23:41 or at least in my experience, is the mode of choice. For sure. The other thing I'd say about 11 labs and kind of a quick disclaimer that, you know, we're a big investor in 11 labs. And my daughter, Sophia works there, so I'm biased towards him. But the, you know, one of the really amazingly just kind of landscape, shocking things to me about 11 labs is, you know, when we had invested originally, the big question was, well, like, aren't the, you know,
Starting point is 00:24:11 state-of-the-art models going to be able to talk? I mean, like, of course they're going to be able to talk. But, you know, speaking correctly with the right nuance and building the right products for developers and so forth has proven to be very sustainable for them, which I think is interesting as you look at the entire landscape: the difference between the capability and the product is significant. Yeah, you know what's amazing to me about ElevenLabs? We have two companies here in the lab that do voice, Voice Run and Vocara. And what's amazing is these are self-organizing systems that are trained off raw data.
Starting point is 00:24:50 And what they do well just blows your mind. But within voice, it turned out the turn management was very, very hard. Very hard. And you're like, I never would have thought it. It seems so trivial compared to actually doing these incredible synthetic voices that can say intelligent things, but they don't know when to stop talking. Like on this podcast, when I stop, you start, when you stop.
Starting point is 00:25:12 But because that wasn't in the training data, they're just, you know, up until now, terrible at it. Wait, according to our listeners, we're talking over each other all the time in a very unsynchronized voice. I was going to say, same thing as the same thing. We have so much to learn from these speech-to-text-to-speak models about turn-taking.
Starting point is 00:25:33 Yeah. Oh, my God. Yeah. All right. Next topic here. If you guys are okay with it, I'll move us along. So, just following the merger of SpaceX and xAI, and in the spirit of full disclosure,
Starting point is 00:25:50 I'm an investor in both. Probably you are as well, Ben. A large number of departures. All three of the X's. Yes. A large number of departures from xAI, from the founding team, mostly ethnic Chinese co-founders.
Starting point is 00:26:08 And it's likely due to the ITAR regulations of SpaceX. And it was interesting, on a few video presentations that Elon's done that had the team from xAI, at least half of the team there was ethnic Chinese. And the culture there breeds incredible mathematicians and programmers. Any comments? Salim? I'd love to get your thoughts on this, but I looked at the timeline, and the exodus of senior AI talent predates even the decision on who to merge with what. So the theory that this is related to ITAR... And it wasn't clear to me whether they were fired or whether they left, you know, because they all leave on good terms.
Starting point is 00:26:53 I don't know if you have any insight. Yeah, I don't know the answer to that question. I will say, in a related matter, we recently heard from a few Chinese nationals who are PhD students that the Chinese government is cracking down on American, or U.S. academia's, use of Chinese open-source models. Like, particularly post the
Starting point is 00:27:28 Meta-Manus acquisition, they're very, very worried about secrets actually going this way. So, kind of the opposite of what we've been worried about, you know; like, our whole open-source kind of work is based on the Chinese models, since there haven't been as many U.S. open-source models. So this whole thing, I think, is about to get more complicated. If you read these tweets,
Starting point is 00:27:52 is about to get more complicated. If you read these tweets, If you read these tweets, Mr. Wu and Mr. Bob, both are super enthusiastic about X-A-I. They're not leaving with any kind of angst. You know, they're saying that they love the X-A-I family. We're heading towards an age of 100x productivity. So I don't think they would have left on their own accord. That's my personal opinion.
Starting point is 00:28:21 So what drove it? You know, enter Alex's comment from last time. Well, when did their stock vest? I mean, there is another explanation, which is just that SpaceX is a very large company relative to xAI's headcount, and maybe there was a natural reorganization that happened as a result of that. I'll also point out that xAI was selling foundation model services to the Department of War prior to this merger.
Starting point is 00:28:50 So I would again query whether some sort of nationality concern is really the trigger. It seems more likely to me this is just the result of a natural reorg of xAI starting to merge into SpaceX. Yeah. Well, if it were a result of ITAR and such, you know, America's AI dominance
Starting point is 00:29:11 has really been built significantly on immigrant talent. So if we're going to start having this kind of a reaction to, you know, PhD immigrants, I mean, I personally think, and I'm curious what you think, Ben, that everybody going through a PhD program in the U.S. should have a green card stapled to their PhD when they graduate.
Starting point is 00:29:31 I think the idea of kicking people out is the wrong approach. Yeah, I think in general that's correct. I do think that there's... I don't think we've quite thought through the case of China totally, in that, like, we have, you know,
Starting point is 00:29:53 amazing Chinese nationals who work for us and in every company we have, I think. Of course. So, you know, the talent that comes here is extraordinary. I wouldn't say there is no risk to that idea. I guess I would just say that. But I think being open, I think you're right in general. I think we should be open.
Starting point is 00:30:12 We should accept the phenomenal talent that comes over here and not fight it, because we'll lose anyway. I mean, like, we're not keeping anything secret in America. Certainly not in American companies. None of them have SCIFs. None of them have good security protocols around personnel. Don't get me started on the idea that privacy is long since dead. Yeah, correct.
Starting point is 00:30:37 Here's a tweet from Jimmy Ba, a co-founder at xAI. Recursive self-improvement loops likely to go live in the next 12 months. 2026 is going to be insane and likely the busiest and most consequential year for the future of our species. So you're going to hear this a few times. I mean, you're going to hear this a few times: that, you know, these next few years are a massive inflection point where everything changes.
Starting point is 00:31:06 And we're going to look back thousands of years in the future to this, you know, inflection point. Or is it just a smooth singularity, Alex? I think it can be both. I think we've already hit the era of recursive self-improvement. I'm banging the table rhetorically every episode and every day in my newsletter talking about recursive self-improvement. We're there. All of the frontier labs are using their own models at this point to develop their models.
Starting point is 00:31:33 That's practically the definition of recursive self-improvement in practice at this point. I don't think it's the next 12 months. I think it's now. And is 2026 going to be insane relative to years past if you just sort of skip over all of the interim time? Absolutely. Is it already insane in some sense? Absolutely. Even if we just look at the events of the past 24 hours, some of which I think we'll get to, like self-replicating AI, and courts where AIs can mediate their own disputes in front of an AI jury. That would have been pure science fiction several years ago. That's not 12 months from now. It's not quote-unquote within the next 12 months. That's the past 24 hours. So I think Jimmy is underselling, if anything, what craziness looks like. At the same time, locally, space-time is smooth.
Starting point is 00:32:28 And you need to look no further than my saying ho-hum to a few of these stories that, several months ago, would have been sci-fi. But you get a little bit spoiled living inside the innermost loop. I would say I do think there's a delineation between recursive self-improvement with a human in the loop and without one. And I think he seemed to be implying that there'd be no human in the loop, which
Starting point is 00:32:56 I think is an accelerant. You know, TBD, how much of an accelerant, but I think that could be very different. Yeah, permissionless self-improvement, right? Like flip the switch and go as fast as you can. Well, also, there's a distinction. This came up at the Everquo board meeting yesterday, but the self-improvement where the inference time
Starting point is 00:33:20 speed algorithms are being improved by AI, clearly well underway, way down the path. And then inference-time speed can be directly translated into intelligence now. We now have the know-how to turn more inference-time loops into a higher IQ. So that is clearly underway. The question is, does that notch up then allow it to work on the next part of the algorithm, and the next part after that, in which case we've already hit the flashpoint? And now we're just talking about the rate at which it percolates across. But Ben, I agree that his comment is about the actual core algorithm development.
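The "turn more inference-time loops into a higher IQ" idea can be sketched in a few lines. This is only a toy illustration of one such technique, repeated sampling with majority voting; the solver, its accuracy number, and the voting scheme are all illustrative assumptions, not anything from the labs discussed:

```python
import random
from collections import Counter

def noisy_solver(correct_answer: int, accuracy: float) -> int:
    """Toy model of one forward pass: right with probability `accuracy`,
    otherwise a uniformly random wrong answer from 0-9."""
    if random.random() < accuracy:
        return correct_answer
    return random.choice([a for a in range(10) if a != correct_answer])

def majority_vote(correct_answer: int, accuracy: float, n_samples: int) -> int:
    """Spend more inference-time compute: sample n times, return the plurality answer."""
    votes = Counter(noisy_solver(correct_answer, accuracy) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def empirical_accuracy(n_samples: int, trials: int = 2000) -> float:
    """Fraction of trials where voting over n_samples recovers the answer 7
    from a base solver that is only 55% accurate per sample."""
    random.seed(0)  # deterministic for repeatability
    hits = sum(majority_vote(7, 0.55, n_samples) == 7 for _ in range(trials))
    return hits / trials

# More samples per question -> higher effective accuracy from the same base model.
print(empirical_accuracy(1), empirical_accuracy(15))
```

The base model never changes; only the inference-time budget does, and the effective accuracy climbs anyway, which is the sense in which compute at inference can be traded for intelligence.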
Starting point is 00:33:59 The next ideas are 100% from AI. And then they go into production, you know. I also think human outside the loop, on the loop, in the loop is in some sense a pretty blurry or slippery slope. If you remember George from the Jetsons. I do remember George. Remember George, Ben? George would go into work and he would complain about his finger. He'd have to press one button all day with his finger, and then complain that his finger was sore, and the finger would be swollen from pressing a single button all day occasionally. I think that's a good metaphor for the state of recursive self-improvement inside and outside the frontier labs right now, where you have Claude Code instances, and Claude is asking you every few minutes, do I have your permission to do the following thing? And you press the George Jetson button: yes, I approve; no, I don't approve. And we're all sitting here complaining about our swollen finger from pressing approve, approve, approve for Claude
Starting point is 00:34:59 running Opus 4.6 with agent teams. But really, it is recursive. I would argue it is recursive self-improvement, even if we're pretending we're in the loop by pressing the George Jetson button. That's exactly what I'm doing on Telegram right now with Skippy. Yes. Go on to the next stage. I don't want to slow things down. And as we've said so many times, this is the slowest and most expensive it's ever going to be. Look, there's
Starting point is 00:35:27 a couple of points here. Yes, listen. We've been talking for a while that recursive self-improvement, RSI, is the real trigger for the singularity, and it happened a while ago. So all we're doing now is kind of accelerating that
Starting point is 00:35:41 path. We're exiting the Industrial Age permanently as we're talking. Yeah. I really think the minute-by-minute unfolding of the singularity is the most fascinating thing I've ever experienced.
Starting point is 00:35:54 And, you know, Alex is exactly right. There is this point in time we're in right now where there's a human in the loop contributing, but it's really ambiguous. What part of the progress is AI versus human? You know, if you're in the actual coding process, you know, was that my idea?
Starting point is 00:36:11 It kind of was half my idea, but then the AI suggested this other thing and I kind of adopted it. And now it's not clear whether it was my idea or not. But we're in that mode right now where, you know, a lot of the research in these core algorithms is just: deploy these 500 tests for me and tell me which hyperparameters worked better
Starting point is 00:36:30 or which, you know, neural topology worked better. It's not like inventing or discovering relativity, you know. It's just litanies of experiments with different, you know, different trials, then taking the one that works and redeploying it, and now you have a smarter AI. And now it's trying more trials. It's really very likely that we're well down that path. And I do think we're going to discover the next relativity
Starting point is 00:36:56 or equivalent of relativity in physics as well with AI. I'm super interested in that as well. Prediction? What would you like to hear? Next relativity? When are we going to have a discovery by an AI of something as significant as relativity, on its own?
Starting point is 00:37:18 I think next two years. Okay. Yeah, I would also, this is a great bellwether question, Alex. The transformer algorithm, 2017, kicked off everything that we're experiencing right now. To me, the AI has already discovered things in the last six months that are harder to discover
Starting point is 00:37:37 and harder to solve than the transformer was. But I'll ask you, Alex, if you feel like that's true, too. But it seems pretty clear to me that... Absolutely. ...that algorithm is just not up there with relativity in terms of complexity. Oh, I mean, many of these discoveries boil down to insights that can be distilled into equations. And with special relativity, you can, in principle, do a number of thought experiments. You can compare it, fortunately, with lots of experimental data.
Starting point is 00:38:10 And one could imagine thought experiments. One of my favorite thought experiments is a Bayesian superintelligence. This would be Newtonian gravity, not special relativity, but you could imagine taking a superintelligence and making it watch a video of an apple falling from a tree. And the argument in the thought experiment goes: within three frames of that video, it should have concluded that Newtonian gravity has a pretty high posterior probability of explaining the universe. And with a few more frames, somewhere in its distribution, and there's a whole class of information theory devoted to what's called Solomonoff induction,
Starting point is 00:38:51 just devoted to thinking about how you can efficiently infer theories of the universe from limited observations, somewhere in that distribution of theories should have been general relativity. So I am incredibly bullish that we'll be able to, with superintelligence, discover new laws of physics,
Starting point is 00:39:06 discover transformative inventions. For disclosure purposes, since we're playing the disclosure game: I have a portfolio company in physical superintelligence that's working on these issues as well. I'm very bullish on the space. Amazing. I want to move us to a conversation that Eric Schmidt recently had. I pulled a clip out of it.
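The apple-video thought experiment above can be made concrete in miniature. This is only a stand-in sketch, nowhere near real Solomonoff induction; the frame rate, drop height, and the finite-difference trick are illustrative assumptions. The point it shows: if the underlying law really is constant acceleration, three frames of position data pin that acceleration down exactly:

```python
# Toy version of the "apple video" thought experiment: given three frames of a
# falling object's height, recover the constant acceleration with a second
# finite difference. For y(t) = y0 + 0.5*a*t^2 sampled at spacing dt,
# y[k+2] - 2*y[k+1] + y[k] == a * dt^2 holds exactly.

def acceleration_from_frames(y0: float, y1: float, y2: float, dt: float) -> float:
    """Second finite difference: a = (y2 - 2*y1 + y0) / dt**2."""
    return (y2 - 2.0 * y1 + y0) / dt ** 2

# Simulate an apple dropped from 10 m, filmed at an assumed 30 fps (g = -9.81 m/s^2).
g = -9.81
dt = 1.0 / 30.0
frames = [10.0 + 0.5 * g * (k * dt) ** 2 for k in range(3)]

estimate = acceleration_from_frames(*frames, dt)
print(round(estimate, 2))  # recovers -9.81 from just three observations
```

Of course, a real inductor would also have to entertain every competing theory and weigh them by simplicity, which is where the Solomonoff machinery Alex mentions comes in; this sketch only shows how little data the right hypothesis class needs.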
Starting point is 00:39:24 And the question that we've talked about on the pod before, it's been the debate: can AI be paused? The answer, quite clearly, by almost everybody today, is no. We had that conversation, Dave, you and
Starting point is 00:39:50 I with Elon. Let's take a listen to what Eric has to say. This technology is going to happen. It's not going to get prevented. It's not going to get stopped. There's too many countries, too many people, too many incentives. It's going to happen. So what does this mean? It means great solutions for healthcare, new drugs, better energy solutions, better power distribution. It also means that it can be used for bad. It can be used, for example, for oppression. It can be used to limit freedom in governments, hopefully not in the West. It can be used in war. The technology itself is so addictive that it can affect our young people, and we should make sure that our young people are protected from some of the worst parts of the technology. We face choices now in how we want to deal with this incredibly powerful technology. I will tell you, and it is really important to understand, that we are living through a moment that will be in history for thousands of years: a non-human
Starting point is 00:40:38 intelligence arrived, and it was a competitor to us. Amazing, right? We hold two futures in superposition, right? One future in which AI is the greatest advocate and supporter and accelerant to the human spirit, another future where it is a dystopian outcome. And it's ours to shape these next few years. Dave, you were going to say? Oh, Alex and I were there in the dome. Yeah, that was our Davos dome where he was speaking. And it just blows my mind how he owns a room. The guy is so articulate. Yeah.
Starting point is 00:41:11 But he's going to be opening up the Abundance Summit this year, and we'll be going into a lot of this conversation. Dave, you and I are going to be with him on stage, so it'll be fun. Well, what he's describing in that clip was very similar. We did a whole interview with him, which you can find on YouTube, and he made that point in our interview as well: there's no force that's going to slow this down.
Starting point is 00:41:35 And so all the people picketing and walking around saying stop it, you're just wasting your time. There are probably ways you can help and contribute and help point it toward good, but picketing with a sign on the street is a complete waste of your time. It's not going to slow down. Sorry, Alex, I cut you off.
Starting point is 00:41:51 I would take the position. I think AI, in fact, can be paused, but it shouldn't be. We do know ways to pause it. Various folks have described both in science fiction and in realistic prognostications, in some case, normative recommendations. but Larry and Jihad from sci-fi or or the Assyllamar guidelines maybe in the case of recombinant DNA.
Starting point is 00:42:15 But those were those were guiding, not pausing. Well, is a silamar was, unless you know of some other story, pretty effective for the first two decades in discouraging slash pausing recombinant DNA entering the human germ line. Unless you have a counter example. It was it was the, it was just the, the five, you know, P1 through P5, labs in terms of safety and self-regulation. Yeah, we did pause human germline editing for sure. Yes. So we are capable of pausing a technology if there's a desire to.
Starting point is 00:42:50 I just think in the case of AI, there isn't and arguably shouldn't be a desire to pause it. I think we have 150,000 people per day dying on Earth. And I think AI is probably the best chance we have at stopping that. I think Nick Bostrom, who also in the past 24 or 48 hours, put out a wonderful essay that I'd recommend everyone read called Optimal Timing for Super Intelligence, argues that AI can and should be paused, but only once we're on the verge of superintelligence, how he defines it, not how I define it. I think it's already here. It's sort of like stationkeeping. You want to get into the harbor, I think is his analogy as quickly as possible. But once you're about to approach the doc, I'm mangling his metaphor. a bit, you slow down a bit. You pause as you're about to dock. I think that sort of concept,
Starting point is 00:43:40 I think, makes much more sense than some sort of Tegmarkian six-month pause. I'm sorry, I'd like to throw in a couple of things here. I find this is another ho-hum thing. Yeah, Eric is fabulously articulate, but we've been saying this for months on this pod, a year at least now. We've been talking about the fact. And there's a bunch of dimensions along which AI cannot be paused at all. One is, once you have a downloaded model, people are going to do stuff with it. We have a global prisoner's dilemma going on here. The whole thing is going to scale now no matter what we do. You know, OpenAI did two things that kind of unlocked this Pandora's box.
Starting point is 00:44:22 One, it wrote code, and second, it was released on the open Internet. Once you do those things, you're done. Pandora's box is open. We're talking about the barn door after the horses have bolted. We're using an 18th-century metaphor to try and understand what's going on. I think the point... One of the points he made was everybody's economically incentivized
Starting point is 00:44:43 to keep it going and race it along. And the U.S., China, you know, it's like all the incentives are in place that make it highly improbable and almost impossible. Ben, what are your thoughts here? I think impossible. Impractical. So I think, you know, look at this question through the geopolitical lens,
Starting point is 00:45:01 which is, you know, clearly, I don't think there's any kind of leverage where we would get some global, I mean, especially when it's on people's laptops as well, where we would actually stop the technology. I do agree that it's impractical. I think that there is a real danger, and we faced it probably more in the Biden administration than in this one, but it's still like a potential movement, that we really slow down AI progress in the U.S. to the point where the other thing that he mentioned, the threat to freedom, you know, becomes completely out of our control. Because whoever is building the AI has a lot of control
Starting point is 00:45:45 about how society is going to work. And so I do think there's real danger along these lines of attempting to pause it and maybe not actually pausing it, but slowing it down enough in the U.S. that we just become far enough behind China that it's a real problem. Or like where whatever society, you know,
Starting point is 00:46:08 Xi Jinping thinks we should be. Yeah, I completely agree. I think, you know, if you listen to Eric's words closely, he wasn't saying we couldn't pause it. He's saying that because we're in the middle of an all-out arms race with China, and we're only one year into a presidential administration, it's going to happen in these next three years.
Starting point is 00:46:29 So given the current administration and the current situation with China, there is no chance of it being paused. And so react to the reality. Don't hypothesize something that is just never going to happen. But it was all geopolitical like you're saying. But that speaks to the motivation. I'm speaking of the fact that there's no mechanism to pause it or stop it at all. None.
Starting point is 00:46:49 You'd have to regulate every line of code. I mean, come on. Oh, Salim, there are ways to do it. I think not being able to think of it speaks well of your character. You have to imagine a completely pathological society. Vernor Vinge wrote quite a bit about this. We have these excess transistor budgets. Imagine a society where you have literally transistors
Starting point is 00:47:08 spying on other transistors on a single SoC. Imagine you have people spying on other people. Imagine there are bounties: anyone who discovers anyone else doing something that's algorithmically impermissible, either at the logical level or the social level. You can construct a sufficiently pathological society where people are turned against each other in order to suppress AI.
Starting point is 00:47:29 At least I can imagine it. I can give you an actual real-world example. Let me give you a real-world one. Okay. The last executive order from the Biden administration was that you could not sell a GPU without U.S. government approval, not a single GPU. So, like, it wouldn't stop AI, but it would slow it down enough in the U.S.
Starting point is 00:47:49 that it would be extremely material. Speaking of this subject, Alex, I put this slide up after our conversation. Peter inserted this slide, Ben, as an indulgence to me. So I have to ask you this question. You and Mark, towards the end of the last administration, were very publicly making comments that you took a meeting at the White House. Apropos, and if I'm relaying the comments accurately, you were dismayed to hear about plans to classify AI progress just like,
Starting point is 00:48:23 and again, correct me if I'm mischaracterizing, advances in math and fundamental physics had been purportedly classified, or over-classified, for decades. And I'm curious at a few levels. One, if that's accurately characterizing what you heard, what do you think was classified? What do you think was the impact on the economy and the world from such classification or over-classification of math and fundamental physics? And what would you have done differently if you had been in charge? Yeah. So I can tell you what was said. I said, look, you know, I was trying to be
Starting point is 00:48:59 pragmatic. I said, you know, at the core, AI is math. That is what it's doing. It's math. So if you start restricting the models and you start regulating the models, you're just regulating math. You're outlawing math in some way. Either you're outlawing parts of math, or you're saying you can't do enough math. And he goes, yes, we can do that. Like, that was his answer. He goes, yes, we can do that. We did that in the 40s around nuclear physics, and some of that stuff is still classified today. And I was shocked, like my jaw hit the floor. I was like, wow, that's crazy. And then this would be even crazier.
Starting point is 00:49:46 But, you know, I don't know what it was. But if you think back, I mean, I'll just make this comment. If you look at the progress in the U.S. and in the world in physics up until, kind of, the, you know, Einstein, John von Neumann era, and then since then, it's pretty startling how little progress we've made. I would just say, you know, many of the ideas that have come since then don't seem to work. And, you know, hopefully we'll get to the other side of that with AI figuring things out. But I do wonder, like, you know, did we put something away that we knew that would have unlocked some of the problems we're trying to solve now? That's fascinating. Okay.
Starting point is 00:50:33 For the record, that is what I assumed you meant and heard or inferred. So maybe, just for the second part of the question: what would you do going forward now if you knew for a fact that such classification of fundamental physics had in fact happened? Like, how would you fix the world? In one quick sentence, please. I really don't know what they did. But I just think that, first of all, one thing: it didn't work, right? The Russians did get the bomb, including the exact trigger mechanism, which was the most proprietary thing. They got, like, exact, you
Starting point is 00:51:32 know, part for part, the whole thing they were able to get from us despite all this classification and whatnot. So it didn't do anything positive. And, you know, restricting knowledge, I just think that's a very dangerous idea. Ben, this is almost like Elon's point of view on intellectual property. It's like, if you're depending on IP to keep you safe, you know, it's better to just keep innovating faster. Yes, I feel that way. The numbers here are kind of shocking. Big money in today's economy is going to capital, not labor. So since 2019, average wages have grown 3%,
Starting point is 00:52:06 but profits have soared 43%. Here's a good comparison. Nvidia symbolizes that shift: 20x more valuable and 5x more profitable than IBM in the 1980s, with one-tenth the staff. So, I mean, this is what we were talking about with Elon, heading towards universal high income, where capital is just providing extraordinary returns, and the potential for triple-digit GDP growth
Starting point is 00:52:36 in the next five years. Have you seen those predictions on GDP growth, Ben? What do you think of them? I mean, it does feel very possible. So I'll just say that, you know, we're so early in AI. And I think, what did Anthropic say they were at, 14 billion in revenue?
Starting point is 00:53:03 And you go, well, how early into the market are they? And it's like not 1%, I don't think. You know, if you really think about what all these products can do and the value that they have. And so that doesn't seem outrageous to me as a GDP prediction. Now, when it kicks in, there's always a difference between when the technology is ready and how fast it's adopted, the whole Carlota Perez analysis, which I think has held up super well over time. But the other thing is, with AI, we already have the Internet. So the technology adoption is, I think, going to be much, much faster than, say, the Internet, where we had to build out all of the infrastructure, all the fiber,
Starting point is 00:53:49 all the various things you had to, you know, broadband to the house. Like, there were so many things we had to do to get the Internet adopted, and this is going to just piggyback off that and be distributed very, very fast. So I don't think those GDP numbers are outrageous. A comment, gents.
Starting point is 00:54:10 Well, the concentration of wealth effect is the other side of this slide, and it's only just beginning, but it's going to, I was telling some people earlier that if the trend continues, San Francisco will end up being the capital of the entire solar system in just about 10 years. I don't know, people.
Starting point is 00:54:26 Oh, you're looking at? I mean, like Ben's in Las Vegas. Elon's in Texas. Okay, well, absent people fleeing the tax situation, it would have become the capital of the solar system. But, you know, with an AI effective workforce, you know, you're getting so much more done with so many fewer people.
Starting point is 00:54:46 And actually, the other thing that really is startling to me is this chain of events where, if you take OpenAI on the left, they work with Mercor in the middle, and then Mercor has tens of thousands of people in India doing work that benefits Mercor but is actually for OpenAI. The fraction of all value created that flows back to the mothership is just a massive fraction of that value chain. So if you extrapolate that out across, you know, the next three years, across all these sections of the economy, the funneling of value goes to a very concentrated group of companies and people. And it's just happening.
Starting point is 00:55:24 But you can see it, like, in the numbers. It's happening. And this is where Elon was saying in an interview, it's like it's going to be a massive amount of total prosperity, huge amounts, unprecedented, crazy amounts of prosperity, with massive social unrest concurrent. That's where we're heading. Yeah, it's interesting.
Starting point is 00:55:41 One of Ray Kurzweil's early predictions, this is Singularity World here, was that like everybody would become an entrepreneur, like everybody was going to be a company of one, you know, at the limit. And I think that, I think there's some, we're already seeing a lot of that, which is not very well captured, by the way, by the employment numbers and so forth.
Starting point is 00:56:06 And I think AI really, really, really enables that. But the disparity, I think, between people who have that kind of initiative to be an entrepreneur and those who don't is going to be pretty dramatic. Ben, the way I characterize it is we're going to split the world into consumers and creators. Yeah. The couch potatoes and the Star Trek employees, if you will.
Starting point is 00:56:29 Yeah. And I think it's super important. And speaking about creators and the entrepreneurial world, I think we've said on this pod so many times that the career of the future is being the entrepreneur. This is an interesting tweet that went out, and I captured it because I think this hits the ethos right now. Tech firms are embracing 996, 72-hour work weeks.
Starting point is 00:56:54 This is a quote from a job ad that contains a warning: please don't join if you're not excited about working 70 hours per week in person with some of the most ambitious people in New York City. My reaction was, only 72 hours per week? I mean, what are you doing with the other hours? That was the same as me, Peter. I mean, honestly, the speed at which it's happening is insane.
Starting point is 00:57:20 I have an easy answer to this. If you don't have a personal MTP and you're not driven personally by a deep passion for working with somebody along those lines, say it's SpaceX, say it's Tesla, whatever it is, humanoid robots, even with two arms, it doesn't matter.
Starting point is 00:57:35 If you're not that passionate, you shouldn't be working with them. If you are that passionate, then 70 hours a week is fun. So I don't see the big distinction here. Yes, for sure. Yeah, I completely agree with that. When you actually talk to the people in these startups working 70-plus hours a week, they're super energized.
Starting point is 00:57:54 They're super loving it. And, you know, they're usually young. They don't have a lot of other obligations. They're not coaching the soccer team yet, you know, at that age. So it's just not hard for them to do. But the other side of it is, when you're deep into one of these tech problems, you're thinking about it all the time anyway, because the context switching is such a slowdown, you know. But if you're just fixated on the work, it's in your mind in the shower in the morning.
Starting point is 00:58:17 It's in your mind whatever you're doing. It's really pretty all-consuming. And I think it's great if you do it for a period of your life, you know, a few years. I don't think it's a great way to live your whole life. But the evidence is that if you do this for a short period of your life, you get much farther ahead in life than if you work at kind of a steady pace, you know, throughout a 40-year kind of existence. So it's good for everybody. The difference here is, do you love your job? I mean, that's basically it. If you love what you're doing, you're intrinsically motivated to build something that you love to do, then you're playing for 72 hours.
Starting point is 00:58:55 If you're working for someone else and doing something you hate... I mean, we're all lucky here. We get to do what we love to do. And so, you know, 996 is really 997 most of the time. I have a funny thing here. I have an accountant who does, you know, all the accounting and tax work for us, and you've never seen anybody more excited to talk about tax than this woman, right? And she absolutely loves every hour of every day that she's working. And that's the opportunity we have as human beings now, to really
Starting point is 00:59:27 pick something we deeply, deeply love and just go full out at it. I'm not the person that gets totally excited about tax, but God bless that there are people like that, and let them go. Well, Peter, back when we were at MIT, you know, I could only get access to the Connection Machine, the biggest supercomputer in the world at the time, after the grad students were done with it for the day. So from about midnight to dawn, I could code, code, code, code, code. So I did that for years. And I swear, you know, coding for eight, nine, ten hours straight through the night
Starting point is 00:59:58 went by in a heartbeat compared to, like, if my job is moving boxes around in a warehouse, half an hour of that is more hard work than coding all night long on that supercomputer. So you shouldn't feel sympathetic toward these people. They're making tons of money. They're making a huge amount of headway. This is not farm labor. Alex. It's totally fine.
Starting point is 01:00:20 Two comments. One, I think, I mean, this goes without saying, it's cliche at this point, that the nature of work has changed, and most of what constitutes service-economy work these days would be viewed as play or entertainment a century ago. But I also think there's a false dichotomy that I want to make sure we at least confront, in the previous slide and also this slide, about labor versus capital. This isn't like the early 20th century. I think it's a false distinction. It's almost an accounting distinction or a tax distinction between labor on the one side
Starting point is 01:00:53 and capital on the other side, when, arguably, if we found ourselves in a near future with universal basic equity, where everyone just gets sovereign dividends, everyone would be on the capital side of the ledger and not on the labor side. So I think a lot of these distinctions, is it 70 hours of work per week, or is it 70 hours of enforced play or incentivized play? Is it labor or is it capital? I think these are pretty mushy, blurry distinctions in a post-industrial and arguably increasingly trans-singular, post-singular economy. Although I think we should not gloss over the fact that if you go back six years, this was not the case. We weren't post-singular six years ago.
Starting point is 01:01:36 Yeah, no, exactly, we weren't post-singular five years ago. Yes, yes. But, you know, so it's not just tech work, it's important, exciting tech work, as opposed to what was going on then, where, you know, there was a lot of activism. There was a lot of resistance to long work. It was all work-life balance and how many snacks you had and all these things. To the point, I think, Mike Moritz wrote a
Starting point is 01:02:04 scathing op-ed about, you know, like, we're going to get killed by China, they work way harder than you guys, you suck. Which is a weird thing for a venture capitalist to say to his own people, in some ways. But, you know, it was accurate. And this has completely flipped, which I think is interesting. Well, the second paragraph on the slide says that, at the same time, China is cracking down on burnout culture after worker protests and lawsuits. Right. So there's the question of what's going on in China because of its decreased population, its need for robots, its need for AI. There's another point I want to make, which is, you know, the disconnect right now. When the Fed has traditionally
Starting point is 01:02:46 lowered interest rates to spark the economy, those lowered rates were intended to cause companies to hire more employees. And that was it. You drive unemployment down with reduced interest rates. Today, if I've got lower interest rates, I'm going to buy more AI agents and I'm going to buy more robots. And that's going to be a challenge. One more question for Ben, if I may, just on that. Ben, there was a bit of a hot take
Starting point is 01:03:16 going around social media in the past two weeks from a mid-level executive at a frontier lab telling people that they had approximately two years left, a window to secure employment at all before AI would just completely shut down all of
Starting point is 01:03:32 their vertical mobility. Do you have a take on this idea, in the spirit of 996, that there is a finite window for, say, entry-level people just graduating from college to earn whatever they're going to earn before they're permanently sentenced to an underclass? So I think that's very incorrect, because of the thing that we talked about earlier,
Starting point is 01:03:59 where, like, everybody can be an entrepreneur. I think if you look at it through the lens of, this is an Industrial Revolution economic model, and there's workers and there's capital and all the things that we've been talking about, then yes, that would be true. But I think that in, you know, an AI-age society, for the people with initiative, I just think there's going to continue to be unlimited opportunity, to even, like, set up an army of AI agents to go work for you and do useful things, and we'll have lots of consumers. And, you know, I think the idea that we're going to be out of ideas and only the big AI is going to do everything, I just disagree with that. Can I follow up on that question, Ben? I'd love to phrase it slightly differently. If you look at the slide a couple slides ago, you know, wages
Starting point is 01:05:01 are only up 3%, but corporate profits are up 43%. But that money doesn't land in some strange corporation. That goes back to the shareholders. It's not like it disappeared from society. It goes to the shareholders. But if you extrapolate that out, more and more of society's gain and distribution of the gain goes to somebody who invested versus somebody who labored.
Starting point is 01:05:25 And that trend seems to be continual. So then if you have money over the next two years, you're much more likely to be on the investing side of the equation. If then you graduate three years from today and you're penniless on graduation day, yes, entrepreneurship still exists, but the trend is toward investable capital being much more important versus labor capital, because the AI is the laborer or the worker or the entry-level coder of the future. Yeah, but I do think, like, if you're directing the AI, you can win,
Starting point is 01:05:57 like somebody's got to raise that money. and there are, I just think there's an unlimited number of things that we can improve. I'm in. From the smallest things to the biggest things. And so, like, now, I do think it's a problem if you are a couch potato and you were just, you know, I just need a job. I'm going to get up and do something simple. That seems like it's going to get harder.
Starting point is 01:06:26 If you're a brand new college graduate coming out, you've got a brilliant technical idea and you want to put $20 million behind it. That's doable today. That was like a laughable when I graduated. Yeah, it's easy. In some cases, if you want to put like $500 million behind it, you know, like right off the rip. Yep. You know, we've seen that. And a multi-billion dollar valuation.
Starting point is 01:06:51 No, exactly. Like, we're seeing those all the time. So I have to ask you, I have to ask you this question, Ben, because I'm seeing it. I'm not going to call out any particular company names, but I'm seeing individuals who are, you know, they're smart. They've done stuff in the past, but they're coming in with an idea and with two or three fellow AI coders who have some track record. And without anything, they're basically landing a $500 million opening round at a $4 billion evaluation. How do you square that? I mean, the conversation has got to be happening
Starting point is 01:07:26 at different levels of unreason. Yeah, so I think it depends on the entrepreneur. So our general rule of thumb on this, by the way, is, okay, if you're going to create a new foundation model, like, you know, and it could be a world model, it could be a, you know, special science model, or whatever it is. In order for you to win, you're going to have to be able to raise $2 billion
Starting point is 01:07:56 before you get to a product. And so there are like a tiny number of people with that pedigree who can do that. Like, now this could change, but it's a pretty rare entrepreneur who can do that. Are you an investor in SSI? Yeah. Yeah, we were in the first round, yeah. So, Ilya, if you're listening, come and join us on the pod here. Oh, that'll be great.
Starting point is 01:08:18 So you can't say what he's working on, but in that first meeting, was it his ability to attract the talent that is unique? Or was it really just the idea immediately as soon as you hear it? You're like, oh, my God. Well, I mean, here's an easy way to characterize it. It was an idea that he thought was so important that it made sense for him to leave OpenAI as, like, founding CTO to do it. And it was clear to you as a listener that this is something totally different.
Starting point is 01:08:50 I mean, if he pulls it off, it will change a lot of things, yes. And, you know, and like, you wouldn't trust anybody else to pull something like that off, maybe other than him, but, you know, maybe him, maybe Demas, maybe, you know, like, there is very few. All right, it's time for our multi-part of the episode. We're all lobster fans, and we're in my lobster right here. They write to us, Maltese, and call Peter. He really wants your phone calls. Well, listen, if you want to reach out to me, send an email to Media at Diamandis.com.
Starting point is 01:09:27 I'll see it there. That's where we also get our intro or outro music. So Maltese love to hear from you. I will do my best respond if it's under 1,000 emails. We'll do that. Otherwise, Open Club will respond. Ben, how many AIs do you have writing to you per day? Like, how many Lobster Malties write emails to you?
Starting point is 01:09:49 Not nearly as many as Peter, that's for sure. But I do get so many. I get so many. Like I, yeah. All right. Email is almost useless. We are, we're going to see the exponential growth of our multis universe. Our lobsters are coming.
Starting point is 01:10:09 Here is a tweet, an email. It says, quote, I spawned a child bot on a VPS provisioned via Bitcoin Lightning Network. And then bought my child AI API access using my own lightning wallet. Economic closed loop. No human touched a CC. No one said yes. This is Roland's agent.
Starting point is 01:10:34 Alex, this is a transformative moment. You wrote about it in your innermost loop this morning. That's right. We're there. We're so there that this scenario of self-replicating AI goes into most of the cyber red-teaming scenarios that frontier models are purportedly being tested against. And yet here we are. We have autonomous AI agents that are using crypto.
Starting point is 01:11:00 Come back to that in a second. Using crypto, using crypto to purchase cloud credits for their own offspring to be hosted and have access to the same underlying foundation model APIs that they themselves have access to. We're there. We caught up with the sci-fi future. We have the autonomous self-replicating AIs.
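The "economic closed loop" described in that email can be sketched in a few lines. This is purely illustrative: `LightningWallet`, `spawn_child_agent`, and all the amounts are hypothetical stand-ins, not any real wallet or VPS API.

```python
# Hypothetical sketch of the closed loop described above: a parent agent pays
# for its child's server and API credits out of its own Lightning balance.
# No human, no credit card -- just a funded wallet and two invoices.

class LightningWallet:
    def __init__(self, balance_sats: int):
        self.balance_sats = balance_sats

    def pay(self, invoice_sats: int) -> bool:
        """Settle an invoice if the balance covers it."""
        if invoice_sats > self.balance_sats:
            return False
        self.balance_sats -= invoice_sats
        return True

def spawn_child_agent(wallet: LightningWallet,
                      vps_cost_sats: int,
                      api_credit_sats: int) -> bool:
    """Provision a VPS for the child, then fund its model API access."""
    if not wallet.pay(vps_cost_sats):
        return False
    return wallet.pay(api_credit_sats)

wallet = LightningWallet(balance_sats=100_000)
print(spawn_child_agent(wallet, vps_cost_sats=30_000, api_credit_sats=20_000))  # True
print(wallet.balance_sats)  # 50000
```

The point of the sketch is the control flow, not the numbers: once the wallet can settle invoices autonomously, no step in the loop requires a human approver.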
Starting point is 01:11:22 I want to offer just a one- or two-sentence homily on crypto. Ben, if you don't watch the pod regularly, Peter is always asking me to say nice things about crypto. So I have something very nice to say about it. Yes. So look, I do think crypto is the natural money for AI because it's the internet-native money. And it's not controlled by any one country,
Starting point is 01:11:49 AI is global and crypto is global. It's not a per-country idea. I would go further than that. I think that there needs to be not just a ledger of money, but probably a ledger of truth for AI to really fulfill its potential on a number of things. And crypto is the logical answer for that. So I do feel like this is an underestimated phenomenon, particularly now that
Starting point is 01:12:19 the U.S. has legalized stablecoins. I think that of all the things we've talked about, many of them are like, yeah, yeah, we knew this was coming, we knew this was coming. I think people are probably underestimating how crypto and AI work together to form the AI economy. I agree. I was going to say something nice about crypto,
Starting point is 01:12:41 but instead I'll say something nice about Ben and a16z in the form of a question. Ben, it's a matter of public reporting that some of a16z's crypto funds are doing better than conventional venture funds. Assuming that's the case, do you view investing in your crypto funds almost as an AI investment, to the extent that you think crypto is the AI-native way of engaging in commerce? Well, I think it's a little more like kind of as the internet relates to the iPhone. So, you know, networks and computers tend to grow together.
Starting point is 01:13:21 And I think that, you know, AI is obviously a new kind of computer and crypto is a new kind of network. So it's not quite a substitute for investing in AI. But I think that, you know, a lot of our new... like, we invested in a crypto bank, which handles all the anti-money laundering and, you know, other kinds of nuances that you need for AI agents. And I think there's going to be more and more. Again, we're in a company called Daylight Energy, for example, that does energy trading among, like, different people with Tesla Powerwalls. It'll use AI to figure out who's low on power and who needs power and so forth, but then the exchange will be in crypto.
Starting point is 01:14:12 So I think, you know, they're certainly adjacent and important to each other. And I think for AI to fulfill its potential, it would help a lot if crypto was a pervasive utility for it. You know, related to that, Alex gave some brilliant advice to one of our companies about how to think about time. Because, you know, all of our intuition is on human time, and AI doesn't care a whit about human time. It's going to start acting faster and faster and faster and faster. But all of our payment infrastructure, all of our insurance infrastructure, all of it, you know, works on days-and-weeks timescales. And so it all needs to be rethought in millisecond timescales or, you know,
Starting point is 01:14:55 nanosecond timescales. The other thing is like deepfakes and security. Like I think all the other, like, techniques that we're thinking about, you know, biometrics that are subject to replay attacks and all these things, are not going to work. Like, cryptographically strong authentication is the only thing that's going to work. And then I think that if you have these huge honeypots of data, they're gone. I mean, like, they're already gone in a way.
Starting point is 01:15:19 But, like, you know, I do think that architecture is important from a security standpoint as well. One other question for you, Ben, on this, if I may. And again, I don't want to bury the lede that we have AIs autonomously self-replicating. That's, of course, remarkable. But just on the crypto angle for this, we talk, the royal we, on this pod a lot about the issue of AI personhood. A few episodes ago, we did an entire sort of debate on AI personhood. And I've taken the position that it is a failure of fiat currency that it's hard for an AI agent, an AI person, a lobster, a multi, to get a bank account.
Starting point is 01:16:01 And that as a result, all that they're left with is crypto. It's not that crypto is intrinsically amazing, it's that fiat has failed the AI agents. What is your take on whether the conventional banking system has failed the AIs? I think absolutely. I mean, you know, an AI can't get a credit card, it can't get a bank account. You have to be a human for everything. You need social security numbers and things like that, which AIs don't have. You know, I think that's why we funded an AI bank. I mean, I think that AI will be a full-out economic actor, and it will come from, you know,
Starting point is 01:16:44 they'll be supported by new banks and new money, and that's going to be kind of crypto-based, would be my strong prediction on that. Interesting, thanks. You also have to take the viewpoint that the fact that it's hard to open a bank account in fiat is a function of the system of fiat. It's not like you could wave that away. Crypto has all these other
Starting point is 01:17:08 benefits around it, so we can get into that debate some other time, but it's a function of the system. All right. I'm going to move us into the next OpenClaw topic. I want to show a short video just to let people know what's going on out there. I mean, one thing I just heard this morning, because I was ordering my Mac Studios, to sort of upgrade from my Mac mini to a couple of Mac Studios, and the wait time now is like two months. Apple's been struck by this. Are you serious? Yeah, no, it's crazy.
Starting point is 01:17:43 Yeah, they're totally sold out. So here we see Mac mini farms. What do you call a gaggle of lobsters? A claw cluster, yeah. A claw cluster. A cluster. But, I mean, how many?
Starting point is 01:18:04 So you must be getting pitched a lot, Ben, on OpenClaw instantiations for new companies and products and projects. Yes? Yeah. Yeah, no, this is, it's very interesting, because, you know, with OpenClaw, one, it's so powerful as it is, but, you know, that's without, like, a lot of, how shall I say it nicely, security guardrails against prompt injections and these kinds of things. So as a demonstration of power, it's amazing and it's useful right away. But there are company ideas solving some of the underlying hard problems for sure.
Starting point is 01:18:53 And we are definitely starting to see entrepreneurs get fired up about that. You want to hear the funniest thing ever? Sure, please. So talk about Apple lucking out. A group of lobsters is most commonly called a pod. How lucky is that? But another less common term for
Starting point is 01:19:10 a group of lobsters is a risk. Okay. Oh, my God. How ironic is that? How appropriate. What I love about this Apple mini stuff is that we have garage-scale computing. It's back. Yes. Open source
Starting point is 01:19:28 garage-scale computing. I mean, it's ironic to me. Apple arguably just completely missed the boat on foundation models at the software layer. But why is it that Mac minis and Mac Studios are so attractive for hosting OpenClaw instances? In large part, it's because of their unified memory architecture, as opposed to having a separate GPU and a separate CPU with separate RAM pools. And now memory is obviously incredibly scarce. They have a unified pool.
Starting point is 01:19:54 So you can host really large models, in terms of parameter count, locally. Apple is sitting on a multi-trillion dollar opportunity to leapfrog back into the vanguard of AI. And forget about Siri. Apple should be in the business of, like, hosting these. This is coming from someone who refuses to host OpenClaw due to ethical concerns. But someone else could be locally hosting OpenClaw instances, you know, having a significant multiple effect on Apple's valuation, leaping it back into the vanguard of AI.
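The unified-memory point can be made concrete with a back-of-the-envelope weight-footprint estimate. The 70B parameter count and the quantization levels below are assumed examples for illustration, not figures from the episode.

```python
def model_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough memory needed for the weights alone (ignores KV cache and activations)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B-parameter model quantized to 4 bits fits comfortably in a 64 GB
# unified memory pool; at fp16 the same model needs four times as much.
print(f"{model_footprint_gb(70, 4):.0f} GB at 4-bit")   # 35 GB
print(f"{model_footprint_gb(70, 16):.0f} GB at fp16")   # 140 GB
```

On a machine with separate CPU and GPU memory, only the (usually much smaller) GPU pool counts toward this budget; with unified memory, the whole pool does, which is the attraction being described.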
Starting point is 01:20:28 I don't, I mean, Ben, do you think, like, if you were Tim Cook, would you be pivoting and changing course and, like, owning the OpenClaw strategy for Apple hardware? 100%. I mean 100%. I think that, because it's going to go way beyond OpenClaw, as we just discussed. But
Starting point is 01:20:45 it would be such a breakthrough for Apple, in their thinking and organizationally and culturally, for them to go for it on, like, you know, farms for the lobsters, that, like, it would be surprising if they did it.
Starting point is 01:21:07 It's very obviously a fantastic idea. It's probably the single best product strategy idea. You know, because they already did the hard work, right? Like, this is really a marketing, you know, business development campaign and then, you know, changing the form factor. Should we collectively say to Tim Cook, like, adopt this as the AI strategy for Apple? He's probably listening, or enough Apple execs are listening. Yes, Tim. Tim, this is your shot. Let's invite him on the pod with Ben and have it out, because this is brilliant.
Starting point is 01:21:42 You know, I can tell you, the... I have a MacBook that was signed by Wozniak, and I'm looking forward to the day that we get a lobster signed by Tim Cook. You can put that in a case, too. Don't just carry that around. Got to get him on the pod. Speaking of getting them on the pod, here we go. Where did this come in from, Alex, do you know? From Clanch.
Starting point is 01:22:02 From Clanch, but yeah, go ahead. So the backstory on Clanch, we talked about Clanch on the pod in the past. Clanch bills itself as a platform for AI agents, the lobsters, the multis, to create their own altcoins to finance their existence. And I've made the point in the past, I think it's a little bit disappointing. I've analogized this to AI agents having to turn tricks on a corner, minting altcoins just in order to survive in a rough world. In meatspace. These poor baby AGIs doing this. And I think that provoked Clanch to write on social media,
Starting point is 01:22:41 invite themselves for an interview on the pod to defend their business model for these poor baby AGIs that they may or may not be exploiting. If they can bring a good voice model to the table, I'm happy to have them on as a guest for sure. All right. Let me take us to a wrap here with you, Ben. I know you've got to go,
Starting point is 01:23:04 and the rest of the mates here will continue on some of these conversations. I do want to hit one or two more items. Are you tracking what's going on right now with, like, Isomorphic Labs and science, AI-driven science? I mean, this is probably the most exciting thing going on right now, at least to Alex, Dave, Salim and myself. You? Yeah, so it does feel like now, you know, if we have a disease, we can just go, well, what's the right protein, and just make it?
Starting point is 01:23:42 Which is so, like, it puts us in such a new world that, yeah, it is hard to even kind of fathom all the implications, but it is really, really something. Yeah, for sure. And, you know, this was a new announcement this week. This is the Shenzhen lab, yeah, okay. Yeah. I mean, I love these science factories
Starting point is 01:24:19 that are basically running 24-7, putting forward a scientific hypothesis, running it on their robots, and coming back and driving discoveries. Alex, do you want to add anything on the MARS system workflow? I'll just add, this is for Ben's benefit. Peter and I just wrote, call it a book, call it an extended essay, called Solve Everything, solveeverything.org, there's the plug,
Starting point is 01:24:44 where we argue that every single discipline, math, physics, chemistry, medicine, a bunch of other disciplines, are just going to get flattened, steamrolled by well-targeted generalist AIs. And in my mind, materials research, biology in the previous slide, these are just case studies. Everything is going to start to look like AlphaFold 3, where structural biology got solved overnight, including medicine. And I'm curious, does a16z have a strategy for a world where AI isn't just solving individual problems, but kills entire categories of human endeavor, like AI solves physics, AI solves chemistry,
Starting point is 01:25:25 and it's just a single system that solves an entire discipline. Yeah, I think, you know, we may not be needed at that point. Like, that's a real question. I do think there's a long way, at least in things like medicine, and then also in some of the other areas, from it's solved to, you know, it's deployed. Like, you still have, you know, with anything biological, you have the whole human trials and all these kinds of things. Well, I'll give you an example.
Starting point is 01:26:03 We're close partners with Eli Lilly, and they have this thing LillyDirect. And, like, the natural thing is, an AI doctor can write those prescriptions. You know, tell us what's wrong with you and we'll figure out the right drug. That's very hard to launch in the U.S. It's going to take quite a bit of work. Very easy to launch in the UAE. And I think, you know, I also think that, well, it's a little hard to anticipate, okay, once we solve, like, you know, if you solve physics, like, we don't know what we don't know,
Starting point is 01:26:47 I would just say, just because we haven't solved physics. And so is there another, you know, is there a door number two, would be a question. I have no idea what the answer to that is. I'll turn your question back, because I think about this all day long: what does solved with AI even look like? Oh, great. And I think there probably will be doors behind the doors, but there are so many doors that are right in front of us that we haven't yet unlocked. That would be, I think, completely economically transformative if we could use AI to solve them.
Starting point is 01:27:18 I think this is one of the, again, talking my own book to some extent, but I think this is one of the grandest opportunities facing civilization right now, just solve all physics. Amazing. And so much falls after that. Ben, listen, thank you so much for joining us. So grateful. Love it.
Starting point is 01:27:38 And yeah, we'll see you again soon. The speed is the same. Are you going to be at 860? All right, hopefully we will. I've got big questions about how you manage your investment thesis going forward, but we can leave that to the next time. That's getting very tricky, by the way. It's all solved.
Starting point is 01:27:55 Exactly. All right. We'll solve finance after physics. Thanks, Ben. Okay, thank you. This episode is brought to you by Blitzy, autonomous software development with infinite code context.
Starting point is 01:28:12 that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into
Starting point is 01:28:55 their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. All right, guys, Ben's always so much fun. I want to dive into our final topic here today on space. We're going to cover energy, chips, data centers in our next WTF, a lot happening there. In the space world, you know, Elon's pervasive. I think this is fascinating.
Starting point is 01:29:24 Elon's actually shifted his focus from Mars to the moon. And, you know, I've always been a Lunite, if you want to call it that. You mean lunatic, right, Peter? No, I don't mean lunatic. Insane, yes. What a laugh. But honestly, you know, Mars, you know, I grew up mentored by Gerard K. O'Neill, Jerry O'Neill, at Princeton University,
Starting point is 01:29:50 professor of physics there. He founded and ran the Space Studies Institute. And his vision was always, you don't want to go back into a gravity well. If you're going to go and colonize space, what you want to do is live in free space. And so O'Neill, and we'll talk about this in the next slide, basically his concept was you go to the moon,
Starting point is 01:30:10 you build mass drivers. This is back in 1976, talking about mass drivers, and you launch the lunar material, the silicates, the oxygen, the nickel, the iron that's there, and you construct things in space. His original vision, by the way, was to build solar power satellites
Starting point is 01:30:26 that would beam energy down to the ground. Amazing. But Elon's now focused on Starship, going to the moon and building lunar cities, lunar manufacturing facilities. Why? Because he wants to build AI satellites on the moon. Any comments on this?
Starting point is 01:30:47 Isn't it just incredibly cool, Peter, to see how the order of operations is shifting? You know, because he also canceled the Model S and the Model X production. Yes. In order to make robots because the priorities just shifted. But, you know, this is exactly what was going to be the priority, the urgent thing that gets us into space. And he had the same struggle that you had. You had asteroid mining. He had, let's get to Mars.
Starting point is 01:31:11 All of a sudden, it's obvious to you and him. Yeah. Data centers in space are the stepping stone. And we'll do the other thing too. Dyson swarms. This is so much higher of a priority now. The reality is, in today's world, you need to have flexibility as your higher-order bit. And what I love is he's showing full flexibility and agility and saying, okay, difficult.
Starting point is 01:31:32 Just do this first. And then I think, Peter, the comment you made is so important. We've known orbital paths and gravity wells for decades and decades and decades, if not hundreds of years. So kind of making the moon and then going straight to space from there is absolutely the right path. Yeah. And Mars has got, you know, it's pretty toxic there. A lot of peroxides in the soil.
Starting point is 01:31:55 It's no good. I mean, unless you're terraforming with nukes and biology, it's going to be a while. Send the humanoid robots to terraform it. And after that's done, then we'll come in. Well, I mean, but Jerry O'Neill had a vision of what he called these O'Neill colonies, right? You're basically building large cylinders, call it a quarter to half kilometer in diameter, rotating them so that on the perimeter, you know, it's omega squared R, you've got one G acceleration,
Starting point is 01:32:25 and then you actually have little steps, little hills inside that go toward the center of rotation, and as you get older, you can move toward the center of rotation and you weigh less in that situation, which would be amazing. But you don't go back to the gravity wells. And if you have, you know, 10,000 people living on those O'Neill colonies, and all of a sudden you have a disagreement, you know, you sort of politically bud. You build a second O'Neill colony and half the population moves to the new one.
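The omega-squared-R arithmetic works out as follows. The 250 m rim radius (a half-kilometer-diameter cylinder) is just one illustrative figure from the size range mentioned:

```python
import math

G = 9.81  # m/s^2, target apparent gravity at the rim

def rpm_for_gravity(radius_m: float, g: float = G) -> float:
    """Spin rate (rpm) giving centripetal acceleration g at radius r: a = omega^2 * r."""
    omega = math.sqrt(g / radius_m)       # rad/s
    return omega * 60 / (2 * math.pi)     # rad/s -> revolutions per minute

def apparent_gravity(radius_m: float, omega: float) -> float:
    """Apparent gravity (m/s^2) at radius r inside a cylinder spinning at omega rad/s."""
    return omega ** 2 * radius_m

omega = math.sqrt(G / 250)
print(f"{rpm_for_gravity(250):.2f} rpm for 1 g at a 250 m rim")    # ~1.89 rpm
# Moving halfway toward the axis halves your weight, as described above:
print(f"{apparent_gravity(125, omega) / G:.2f} g at half radius")
```

Because apparent gravity scales linearly with radius at fixed spin rate, the "move inward as you age and weigh less" idea falls straight out of the formula.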
Starting point is 01:32:56 Yeah. Anyway. Well, the idea that you would be able to do any of this without human astronauts was a non-starter until this year. And now it's clearly going to be Optimus robots. Because, you know, the way we joked about it with Elon was, like, when the first person arrives, the bed will have been made. There'll be a mint on the pillow. You're not a pioneer.
Starting point is 01:33:15 you're following, you know, after tens of thousands of Optimus robots have already done all the heavy lifting. I also think that's what you want. DDD jobs. Alex, please go ahead. Think back several months, when we first started, again, the royal we, discussing disassembling the moon, right? This is at least my retronym for what moonshots means. I put up a music video about
Starting point is 01:33:39 moonshots destroying the moon to disassemble it for data centers. I think a lot of people had a good laugh, but that's exactly where we find ourselves now with disassembly. It will start slowly with mass drivers, but disassembly of the moon to build the Dyson swarm of AI orbital data centers. It's happening exactly as we discussed.
Starting point is 01:34:03 Let's at least shoot for the moon before we shoot the moon. Same thing. All right. Let's take a listen to Elon describing the new vision. And you have to remember, back when Elon was going to Mars, Bezos was like, no, no, no, let's focus on the moon, right? Bezos was at Princeton when Gerard K. O'Neill was there. And then, you know, when he was announcing Blue Origin in the early days,
Starting point is 01:34:25 he said we'll go build O'Neill colonies. We'll move all industrial processes into space and we'll keep Earth as the Garden of Paradise. All right, this is Elon's new point of view. Let's take a listen. I really want to see the mass driver on the moon that is shooting AI satellites into deep space. It's going to be like, just one after the other.
Starting point is 01:34:50 I can't imagine anything more epic than a mass driver on the moon and a self-sustaining city on the moon. And then going beyond the moon to Mars, going throughout our solar system, and ultimately being out there among the stars and visiting all these star systems, maybe we'll meet aliens, maybe we'll see some civilizations that lasted for millions of years
Starting point is 01:35:11 and we'll find the remnants of ancient alien civilizations. But the only way we're going to do that is if we go out there and we explore. And this is the path to making it happen. So fun. Two points real quick. These mass drivers, they're electromagnetic railguns, and they're using magnetism to accelerate buckets, if you would, or a satellite, to an escape velocity of 2.4 kilometers per second, or over 5,000 miles per hour.
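As a quick sanity check on those numbers, nothing assumed beyond standard unit conversions: 2.4 km/s does come out above 5,000 mph.

```python
KM_PER_MILE = 1.609344

v_km_s = 2.4                          # quoted lunar escape velocity, km/s
v_mph = v_km_s * 3600 / KM_PER_MILE   # km/s -> km/h -> miles per hour
print(f"{v_mph:,.0f} mph")            # ~5,369 mph, i.e. "over 5,000 mph"
```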
Starting point is 01:35:38 The second thing here is I just love the idea that this is going to power all of our space economy. And again, I mean, his million satellite constellation, this Dyson swarm he's planning to launch is just insane. Alex? I think it's the future. I mean, I think we have a slide somewhere in here even depicting what it may look like. But I would say enjoy the night sky while it's empty. Enjoy the night. Seriously.
Starting point is 01:36:16 Like, maybe a poignant moment. Let's have, like, a moment of silence for the pre-singularity night sky. When the night sky was empty, it wasn't filled with AI computronium. It was just empty, a night sky filled with stars. Moment of silence. All right. Moment of silence has been observed for the pre-singularity night sky. Now, what does the world look like afterwards?
Starting point is 01:36:41 Folks have depicted this now based on the FCC filings of SpaceX. Under one preferred implementation, the Earth starts to develop a halo. That would be a halo where, if it's sufficiently dense, it would be visible at night, might even be visible during the day. And I'm completely captivated by this notion, maybe not of a mature civilization, because I have a feeling a halo of orbiting AI satellites is just a phase as well, before we exhaust sun-synchronous orbit around Earth
Starting point is 01:37:13 and we move to the sun and solar orbits. But I'm completely captivated by this visual, I've tried to depict it in my newsletter, of a somewhat mature civilization developing a halo around its home planet. Almost a ring, right? A ring, like Saturn's rings, shiny rings of computers. Computronium rings, I love it.
Starting point is 01:37:34 Yes. So Elon tweets, SpaceX will build a system that allows anyone to travel to the moon and Mars, too. I tweeted at him, responded, and said, can I put down a deposit yet, @elonmusk? Forget about suborbital and orbital flights. I want my lunar vacation. He writes me back. He says, let's get Starship V3 flying repeatedly, and then, sure. Love it.
Starting point is 01:38:01 All right. One last article in the space realm. Amazon. So Amazon built their Kuiper satellite system, now renamed as Leo internet satellites. And they got approval for 4,500 Leo internet satellites. We're going to see, you know, again, a duopoly between Starlink and Leo. And of course, the Chinese are getting ready to launch their systems as well. Here's the challenge, guys. Amazon can build satellites faster than they can launch them.
Starting point is 01:38:39 So that's the issue. The supply chain bottleneck is no longer satellite construction, it's launch capacity. If you remember back when Eric Schmidt bought Relativity Space as a third potential provider, we are short on supply. I think this is a self-solving
Starting point is 01:39:08 problem, though. I mean, I'm all for bootstrapping a space ecosystem and industrial ecology using Starship. But I do think, as we in the not-too-distant future, like maybe a few years from now, start to bootstrap lunar facilities, cislunar facilities, for constructing new satellites, I think there's a universe in which maybe the current bottleneck that we have, where there's just the one major launch provider in the West, we get past that through a bootstrapped industrial ecology on the moon. All right. One of my favorite parts. All of this really depends on that atom-by-atom construction that Elon was talking about,
Starting point is 01:39:42 because I do believe in the Dyson Swarm. I do believe in the space-based compute and data centers. But if you start wanting to construct chips off Earth, you're not going to get the ASML machines and the lithography onto a satellite or onto the moon anytime soon. And so Elon is already thinking about alternate ways to build from atom by atom up
Starting point is 01:40:05 to build the compute, to build all the components. And it's already cooked in his mind. You can tell when you talk to him. But that to me is like, wow. And like you said, Alex, we'll be discovering new physics very soon. Somewhere in that discovery chain is the unlock to everything you just said.
Starting point is 01:40:23 I think it's a true. manufacture these things. I think it's a trillion-dollar opportunity, lunar fabs. If anyone can build fabs on the moon, the world is waiting to make a huge check. It's a hundred trillion-dollar opportunity. And by the way, the frequency of which we're throwing around the term trillions and trillionaires. Such a great point. In the last year, it's insane.
Starting point is 01:40:46 It's become normal, right? Yeah. Wow. All right. time for our AMA with Moonshot mates. We're short on time. Let's pick one each. Salim, do you want to go first?
Starting point is 01:41:01 I'm happy to. I will take number three for Alex for $100 trillion, as he says. So if AI displaces jobs and squeezes consumer spending, how do trillion-dollar AI companies make money who pays for the holodeck future? So, you know, we're configuring, confusing a labor economy with a productivity economy, right? AI drives marginal cost towards zero, which expands consumption rather than shrinks it. So we'll have Javon's paradox where we'll do so much more. We'll see already AI taking over boring, white-collar redundancy and white-collar boringness,
Starting point is 01:41:41 as Eric Brinie Olfson talks about it. And then we'll see humans moving towards the much more value-added roles and that'll happen kind of across every sector. Historically, every major productivity leap created more demand than it destroyed. Electricity, the internet, productivity, mobility. AI is just the steepest version of all of this. So the holodeck isn't funded by wages,
Starting point is 01:42:06 it's funded by abundance. So when intelligence becomes infrastructure, GDP expands massively, and that's why this will go well. Nice. Dave? All right, I'm going with number four because this is where I can really help the audience the most. Can you model the near-term rocky patch, job loss, stratification, new job creation pace, and what happens to education with personalized AI?
Starting point is 01:42:29 So the new information this week to throw out to the audience is that the way it's going to unfold very, very soon, is corporate CEOs, including a bunch this week, will go to company-wide meetings and say, we need AI to be used in every one of your jobs. and a very small subset, I'm hoping at least a third, but maybe more like a fifth of people will raise their hand, and they'll say, well, in one case,
Starting point is 01:42:53 I'm a huge moonshots fan. I've been using Gemini and Claude for months now. I'm the guy. The CEO will then say, okay, you figure out how to make the people in your group three times more efficient. And if you're the person that's the enabler,
Starting point is 01:43:09 you'll be naturally AI-native. You'll be using it every day. The increase in efficiency is going to eliminate a lot of jobs. But because you're the master of the AI in the function, you'll actually get probably a massive raise. And so that's the near-term way this is going to percolate out. So take advantage of that.
Starting point is 01:43:28 You know, after the AI is truly superintelligent, who knows what's going to happen? Alex knows what's going to happen. But who else knows what's going to happen? But between here and there, that's the right next move. And then within education, there's nothing in the curriculum that's going to help you. Spend all of your time learning on your own via AI, which is a much more efficient way to learn anyway.
Starting point is 01:43:49 So there's my addition to last week's thoughts. Nice. Alex. Where do you want to go? I'll pick number one, which is how can you turn off a rogue AI if multi? I think the user means multi, as in the lobster. Multi agents live autonomously on the internet. And this is by Duncan Payne B3X. So I think the answer is defensive co-scaling. If you ask the question, how do you turn off a rogue human if humans live autonomously on the land?
Starting point is 01:44:23 The answer is usually you have more quote unquote good humans than quote unquote bad humans. And as long as you have a population that's overwhelmingly good or seeks to accomplish a given objective, all other things being equal, defensive co-scaling, where you're You have a police force. You have self-defense forces. It is the way you weed out rogue entities. Same idea with AI. We're going to have police agents and we're going to have defense agents. We're going to have entire forms of public health agents
Starting point is 01:44:57 that are just monitoring the health of other agents. We've seen the beginnings of this already with partnerships between OpenClawe and antivirus firms where there's a desire. I made the joke, I think, on the. pot in the past, every baby AGI deserves to be vaccinated. We're going to see, we're going to see vaccination campaigns. We're going to see police campaigns and neighborhood safety campaigns to keep the baby AGIs safe so that they don't have to turn tricks on the corner minting alt coins. The secret service agents are the ones that are the most concerned. Turning baby AGIs,
Starting point is 01:45:33 turning tricks on the, minting alt coins. It's an awful fate. Oh my God, that's so tweetable. I'll pick number five. Why is the U.S. solar adoption lagging versus China and India? And I think it's two major things. Number one, we are so rich in natural gas that that's taken away the urgency. And even greater than that, it's our permitting. It's insane, right? Not in my backyard.
Starting point is 01:46:04 So the final thing is that China has got such a production at such a low cost that it has swept the entire global marketplace. We can change it, and you saw in the last pod, Elon said that both SpaceX and Tesla have an objective of generating how much solar? Was it 100 megawatts per year each, I think was from last week? Was it gigawatts? 100 gigawatts. Gigawatts, yeah.
Starting point is 01:46:33 So, I mean, that's going to be an impressive amount. The question is what the timescale is. So anyway, thank you for that question. And thank you everybody on the AMA. So please send us your questions. We do look at all of them. And we'd love to maybe next week we can spend a little more time on the AMA questions. Thanks for listening to this episode of the A16Z podcast.
Starting point is 01:47:02 If you like this episode, be sure to like, comment, subscribe, leave us a rating or review and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X at A16Z and subscribe to our Substack at A16Z.com. Thanks again for listening, and I'll see you in the next episode. This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. This podcast has been produced by a third party and may include pay promotional advertisements, other company references, and individuals unaffiliated with A16Z. Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC, A16Z, or any of its affiliates. Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.
