Moonshots with Peter Diamandis - SpaceX Goes Public, Claude’s Mythos Release, and the US Data Center Delay | EP #246

Episode Date: April 11, 2026

In this episode, the mates dive into AI agents, Anthropic and OpenAI competition, AI economics and jobs, quantum risk to Bitcoin, energy breakthroughs, biotech deals, and humanoid robotics.

Read the Wall Street Journal article mentioned in the episode: "These AI Whiz Kids Dropped Out of College and Got Investors to Pay Their Bills"

Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.

My companies:
Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: https://www.fountainlife.com/peter

Connect with Peter: X, Instagram
Connect with Dave: X, LinkedIn
Connect with Salim: X. Join Salim's Workshop to build your ExO
Connect with Alex: Website, LinkedIn, X, Email, Substack, Spotify, Threads
Listen to MOONSHOTS: Apple, YouTube

*Recorded on April 9th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 SpaceX is going public with a $2 trillion valuation. It's the beginning of the IPO wars. The stepping stones are really, really clear now. Starlink gets you into space profitably, then the data centers, then you get to the Moon, refueling in space, then you get to Mars. Anthropic overtakes OpenAI in terms of total ARR. That has got to hurt.
Starting point is 00:00:21 Superintelligence is not paying for the singularity. They kind of bet the consumer would grow faster sooner, but they just did it wrong. Mythos, Anthropic's next flagship model: it's too powerful to release. We've never seen a model like this before. We officially have models that are smart enough to break out of their environments and then apologize for it. We're there. We've arrived at the future. Everybody, welcome to Moonshots, your number one podcast in exponential technologies and everything going on in AI in the world around us. It's an extraordinary time to be alive. This podcast in particular is here to help you stay positive
Starting point is 00:01:04 about the future, optimistic and hopeful. There's so much going on. It's really tough sometimes because the speed is so extraordinary. We want to give you an overview of what's happened in the last two weeks because we've been offline. Why? I hate to say this. I actually took a vacation.
Starting point is 00:01:21 I was in Morocco in the Sahara. And it's great to be back. I've had to come off a ski slope to make this episode. Well, I appreciate that. And we're going to catch up for everybody, all of our fans. We're catching up an episode, so get ready for a flurry because there's a lot that's been going on. Here with my extraordinary moonshot mates: Salim Ismail, straight off the ski slope. Salim, where are you skiing today? I'm in Kirkwood at Lake Tahoe. It was Milan's ski week off,
Starting point is 00:01:50 so we took a few days and just got them out here. D.B2, back in the saddle again. Yep, back in the saddle. We have 200 speakers tomorrow at the MIT Media Lab. And today we had 60 startups pitching here in our first floor and just a lot going on. Amazing. I'm so sad not to be there with you. And our resident genius, Alex, Weizner Gross. Alex, good to see you in your regular haunt. Good to be back in the Commonwealth of Massachusetts. Yeah, fantastic. All right, a lot is going on. We're going to be covering a whole host of subjects in the AI world, in the space world, in the abundance world. One of the segments we're going to be bringing to on a regular basis is proof of abundance, really want to keep you positive on what's going on in the world. Sometimes if you're watching
Starting point is 00:02:35 the Crisis News Network, what I call CNN, it can get you down. Our job here is to keep you informed and bring you back up. But before we do that, Salim, looks like you made some news. Here you are, the cover of India Today. What's this all about? So I was at the India Today Conclave. This is the biggest kind of news magazine in India. And they had a bunch of speakers. So the image is photoshopped, but you've got to understand the context and the surrealness of the world we live in today. So in front of me is Elon's mother. Next to me is Laura Loomer, the MAGA, conspiracy-theorist-type person.
Starting point is 00:03:14 Then there's the Israeli ambassador, and they've put the Iranian foreign minister text. They literally took me back in the speaker room, and they were saying, hey, come and meet these two guys. I'm like, I don't want to be an innerer of that. The Israeli guy's going to pull out a gun or something, and there's going to be an assassination. of town. I think the cover, and then a Bollywood star, you know, and a bunch of business people in the world. What do these people have to have in? I think it's a reflection
Starting point is 00:03:39 of the insanity of the world that we live in today. I think that's where you can read from this cover. And I think it's kind of a commentary on the madness of the zeities. I hope you represent the breakthroughs and not the breakdowns. I did. I was very much on the, hey, we've got major
Starting point is 00:03:55 things happening and we need to kind of organize differently for it, except. It was a great conversation. All right. All right. Right, fantastic. Let's jump in our first story. It's SpaceX is going public with a $2 trillion valuation, and it's the beginning of the IPO wars. So let's catch everybody up.
Starting point is 00:04:15 Hopefully you've been hearing this. Full disclosure, I'm an investor in SpaceX from the earliest days. So SpaceX is pricing itself right now at about a $2 trillion target valuation, raising $75 billion. $10, the largest IPO of its kind. Interesting enough, guys, you know, one would think that the value of SpaceX is due to its rocket launches or maybe recently the merger with XAI. But the vast majority of the value today is Starlink. 75 to 80% of the target valuation is due to Starlink, about 15 to 18% due to launch services, 5% for NASA services. and the X-AI and X-related revenues.
Starting point is 00:05:03 It's all in potential in the future. Dave, any thoughts? Well, the stepping stones, Peter, you've been studying this ever since we were in school together, so a long time. But the stepping stones are really, really clear now. You know, Starlink gets you into space profitably. Then the data centers get you, you know, 50-ton and then 100-ton launches profitably. Then you get to the moon.
Starting point is 00:05:28 then you start refueling in space, then you get to Mars. So it's just so cool to see how Elon lines up the dots on these things. And yeah, I don't think it's any great surprise. You know, Starlink is incredibly successful. It kind of surprised everybody. No one else thought of that being the first move in the chess game. And of course, Elon's two steps ahead. You know what's crazy?
Starting point is 00:05:51 This game plan has been tried numerous times before. So if you go back and, you know, I was early in the space stage. You go back to the late 80s, early 90s. There was a company called Orbital Sciences. It was the hottest company in the launch business, created the Pegasus and the Taurus launch vehicle. And because they had a launch capability, they launched something called Orbcom,
Starting point is 00:06:13 which was a small satellite messaging service from low Earth orbit. And it was their vision to have that be the revenue driver. And they didn't pull it off. We had then the big, that was called the Little Leo. Then we had the big Leo's, big Leo's the eridium, telodesic. And those didn't really make it. I mean, iridium is kind of still around, but kind of walking. Let me ask you, Peter, you know more about this than anybody.
Starting point is 00:06:41 Let me ask you, the idea of a reusable rocket being the breakthrough and cutting 90, 95, and soon 99% of the cost, it seems so obvious in hindsight. But all these aerospace breakthroughs always seem obvious in hindsight, because, you know, once you're doing it a certain way, you're like, hey, it works. But it's never obvious looking forward. But why did it take so long? Is it the weight of the fuel coming back down that everyone's like, yeah, you can't carry fuel up to retro rocket it back down or what? I mean, what's interesting is it's been the Holy Grail. People have talked about it for the longest time.
Starting point is 00:07:15 Back, McDonald-Douglas had a vehicle called the DCX, which was the first vertical takeoff vertical landing capability. They used a RL10 engine, I remember. And it was the great hope of getting there. You know, people are mistaken that, you know, the cost of these vehicles is fuel. Turns out the cost of the fuel for a rocket is on the order of like a couple of percentage points, right? So the fuel for a, you know, liquid oxygen you can get out of the atmosphere, you know,
Starting point is 00:07:50 hydrogen or kerosene, you know, is basically av fuel. So it costs you less than a million dollars in fuel to launch a Falcon 9. And it's now that we have the ability to actually with better materials, better control systems, and just scale makes this possible. You couldn't actually build, you know, fully reusable vehicles unless they got to a certain size and scale, which we have with Starship. So there you go. You know, Dave, one of the thing I want to just ask you about, check this out.
Starting point is 00:08:26 The 2025 revenues for SpaceX. I'm excited about the IPO, right? Yeah. And it's going to be one of the largest events in financial history. But the 2025 revenues for SpaceX were about $16 billion, $8 billion in profit, pretty healthy margin, right, 50%. And it's expected to double in 2026. So imagine $16 billion in profits.
Starting point is 00:08:53 At a $1.75 trillion market cap, that means a price to revenue multiple of 56 and a P.E. ratio of 109. What do you think of that? What do I think of that? Well, I think it's all peg ratio. It comes down to the growth rate. And a company growing 100% year over year is worth 100 times earnings. It's just, or actually more than that 120, 130.
Starting point is 00:09:15 So the question is, you know, can you sustain that growth rate for five, six, seven years? If you look at Elon's projected launches per day, launches per week, and also, you know, his prediction that the global economy will grow 10x in 10 years, this is dirt cheap if any of those things are true. But, you know, if the growth stalls and it's growing 10% a year, then it's 10x overpriced. So you just have to believe the vision. But I think at this stage, though, the Elon believers have invested in him over and over and over again and never had a loss. And so, I mean, I think at this stage, it can't go on forever. You know, someone has to be the last guy holding the bag. But would you be what I bet against him?
Starting point is 00:09:59 No way. Never, ever. The mantra is saying. Yeah, the math checks out. You know, there's nothing fundamentally wrong in the math. You know, Alex would blow smoke on that instantly if there's anything wrong in the math. But there's not. It's just a question of execution.
Starting point is 00:10:13 Yes, so leave. Palantir trades at about 220 times earnings. So clearly there's a multiple with all of this AI stuff. And you look at the combination of all these services that are incremental. But this is obviously just Starlink with a launch capability, but the scale of what's going on. What I found really incredible is that to the earlier conversation, people have tried this for ages and ages. But now you have multiple exponential technologies that have all converged. So this future looks really bright.
Starting point is 00:10:44 That wasn't the case 20 years ago. I'll take a different position on this, if I may. I don't think it's that supply has been unlocked. I think it's that demand has been unlocked. You'll notice that Elon announced the SpaceX IPO the moment after it became obvious to many that orbital data centers were going to have enormous demand. This coincides with an enormous lack of demand, at least within the U.S., for certain locations for new AI data centers.
Starting point is 00:11:09 I think it's instructive to imagine a counterfactual universe where suddenly municipal, state and federal policy, but especially the first two, suddenly became super welcoming of land-based data centers. I think it would, in my mental model of this, if suddenly every state welcomed land-based data centers and the corresponding on-site energy supplies with open arms and probably lots of fission reactors to go with them and solar farms, I think we would probably see the PDE multiple go down materially. Yeah. Well, one other thing I'll say that, you know, of all the big mega guys, you know, the Googles and the Facebooks and our metas, Elon has actually never had voting control of a public
Starting point is 00:11:50 company that he can tap into the public markets overnight. You know, here you're raising $75 billion on IPO day. That's only 3.5% dilution if it hits this price target. I mean, literally three and a half percent. And then you're sitting on a $75 billion treasure trove. But then you can do another capital raise just six months later, do an overnight, whatever, another $100 billion. You know, in the past, he's had huge issues with his boards, his comp plan, his comp plan being vacated.
Starting point is 00:12:17 Then his capital raises, you know, Peter, you've been involved in them. They're long road shows, lots of pitches, scratching together the capital. This gives them a tool he's never had before that, you know, Larry Page and Sergey Brin had. Cash machine. Cash machine. Yeah, cash machine. But, you know, the reality is, you know, having invested in its companies, when he says I'm raising, there is a line out the door, and it's oversubscribed over and over and over again.
Starting point is 00:12:47 You know, I think what's going to be interesting here is bringing in the retail investors and broadening the base of support. We'll talk about that in a minute, but I want to talk about the IPO environment one second because there's a really important point to be made here for all of our listeners. So if you look at IPOs in 2026 versus 2025, there was 35 IPOs this year. it's down 37.5% year on year. And we're about to see potentially the three largest IPOs ever. SpaceX going out at $2 trillion, open AI sometime at the end of this year,
Starting point is 00:13:24 and Anthropic, you know, it says IPO early mid-20207. I think Anthropic wants to go out early this before the end of this year as well. And, you know, one of the things I tweeted about here is it's going to be, I think, a little bit of a competition out there for who gets the capital before it's soaked up. You know, SpaceX is going to be hitting the roadshow in June. Anthropic is, as we'll see later in this episode, has been running circles around OpenAI. And Open AI needs the capital to continue its growth. So I think it's going to be jockeying for position for number two.
Starting point is 00:14:06 I would not want to be number three in this situation. Yeah, no, Peter, you're so right. A lot of people don't appreciate that there is a limited supply of capital out there. It all seems like funny money at this scale, right? Like there must be some infinite pool that God supplies somehow. But it's just not true. And I know it firsthand because when I took Everquote public back in 2018, and it was right when Alibaba was going out.
Starting point is 00:14:29 And Alibaba soaked up every dollar and every analyst and every byside, you know, person on Wall Street. And it is really, really tough to get any audience. And there isn't an infinite supply of capital out there. When you, you know, Peter, you say these are, these are record setting. But look at the chart. If you're, if you can't see the chart, Peter should describe the chart. It's not record setting by a little bit. Yeah. So let's take a look at what's there, right? So Uber goes public for raising, uh, let's see, it's 67 billion meta is at 65 billion Rivian at 55 billion, Robin Hood at 30 billion. And then we've got, you know, it's not a different scale, right?
Starting point is 00:15:17 You know, Open AI and Anthropic will be heading towards a trillion. And SpaceX, I would be surprised if SpaceX doesn't come out at $2 trillion and run up very quickly to $3 trillion. Yeah. Hey, I mean, it's staggering. And I spoke so funny. I bumped into someone the other day. And he was talking about Jamie Diamond.
Starting point is 00:15:35 And I said, well, Jamie Diamond used to be really important. But if you look at the numbers, JP Morgan as a whole is a roundinger compared to any of these things. And of course, he's still a very important guy. No offense to Jamie. But I mean, there are literally like, you know, seven soon to be eight companies and then after Anthropic nine companies that are everything. I mean, just so dominant in scale that they're everything.
Starting point is 00:15:58 And so like a director-level employee there is wealthier than the CEO of a megabank. Crazy. Yeah, just put it in context. There will be a sucking the oxygen out of the room, right, as this happens. And here's the other thing. A lot of the capital used to come from the Middle East, probably still does. But if we're in the Iran War for much longer and access to the sale of oil starts to slow down as the rate goes up, that cash machine coming out of the Middle East to fund these tech IPOs
Starting point is 00:16:32 may be slowing down as well. Oh, I see it the other way, actually. AI is clearly happening in just the U.S. and China, clearly. And it's very hard, if you're global, if you're in Europe or anywhere, very hard to invest in China because you're very worried about getting your money back. So all the global capital wants to invest in U.S. data centers, U.S. IPOs. And, yeah, the Iran situation scares everybody. At the end of the day, what else do you have to invest in AI.
Starting point is 00:16:59 It's going to take over the world. and there's nothing going on in Italy. There's nothing going on in, you know, wherever you are and South America somewhere. So you got to pour it into this economy one way or another. So it's actually, that's why Orrin is doing so well, Cush Bavaria's company, Cush and Wayne, because that money just wants to pour in from all over the world into U.S. data centers. You just have to find great vehicles to unlock it. Amazing.
Starting point is 00:17:26 Let's hit on a couple of questions here on this topic. You know, here's a thought. We have Tesla that's been public. Elon did not want to be the CEO of Tesla. I had that conversation with him many times. He would have loved to have hired a CEO. He just could never find anybody that he trusted at the helm. And now that Tesla is actually building optimist and everything else, he's not going to give that up.
Starting point is 00:17:52 In the same way, you know, he's not going to give up SpaceX and XAI. So the question is, how long before he merges those two companies? You know, one of the advantages is that as public companies, he can now value both. So there's no shareholder lawsuit if they come together, you know, and there's an incorrect valuation. So I give it, I give it a year. What about you, Dave? You know, he could wake up any given morning and say, yeah, let's do that. Or he could say, you know what, everything's fine as it is.
Starting point is 00:18:25 The logical part of it is that, you know, all the robots and all the parts. and, you know, we saw the whole gigafactory. All that is going to get turned into creating the robots, and the robots need to build the spaceships. Also, the AI, which is now over at SpaceX, he thought about merging it into Tesla, but that AI from XAI needs to go into the robot head. So there's going to be a massive business relationship
Starting point is 00:18:47 between the two empires anyway. Merging them makes total sense, but maybe he doesn't want to just for, you know. It's the first true cross-domain exponential empire that he's building here. It's kind of incredible. You know, people aren't buying discounted of cash flows, which is the normal thing. You're buying a means, mission, proximity to the future is what you're buying.
Starting point is 00:19:09 I'm not sure, though, that he actually needs to. If you look at his history of merging his companies, like with Solar City or with X and XAI, or frankly, XAI and SpaceX, he tends to merge companies when they're either not doing well and he needs to fail forward through sort of a self-dealing acquisition, or company needs access to capital. And the easiest way to gain access to capital is with an acquisition. So in my mind, the scenario under which SpaceX and Tesla merge almost requires that either SpaceX or Tesla either fail or be desperate for capital.
Starting point is 00:19:43 And given that they're both. Yeah, that's a great point. That's a great point. If they're both doing well. They're both doing well. He's going to be doing a lot across company deals. And the accounting of that becomes a lot easier if it's under one roof. and if he's the CEO of a single company that he's able to have earnings, you know, once for each company or one company versus multiple, it just makes his life a lot easier.
Starting point is 00:20:08 Perhaps, but he's never necessarily been one to honor strong veils between companies. And I have to imagine lots of cross-licensing deals between SpaceX and Tesla will more than scratch that particular edge. You know, here's another question. the value of SpaceX, let's call it SpaceX AI. That's what he calls it. How much of that is Elon? How much of that is his reputation? Oh, my God.
Starting point is 00:20:36 You know, this, there is, it's a lot, right? And so there is a concentrated risk there. If something ever happened to Elon and, you know, God forbid that it should, you know, all these spinning plates, I don't think anybody else. could do it. Well, I think that's generally true overall. You know, people complain about CEO salaries all the time because they get egregious, but then you look at the outcomes and there's just a set of people that get these outcomes. It's from an investor's point of view, it's a no-brainer to pay for the very best person. And that's just true in general. Then you look at Elon as a special case. And yeah,
Starting point is 00:21:11 no, there's no chance this thing would hold up without Elon at the helm. I would suggest the only- still exist. Sorry, go ahead, Alex. I would suggest if you look at OpenAI, which I think is another instructive example, Sam has said multiple times that he intends at some point to hand over the reins to an AI. So I think Elon, to the extent we're talking about key person risk or key man risk at SpaceX or Tesla, really he just needs to keep going until AI can take over either. And in the meantime, he has Gwyn and others who are very capable CEO-like figures, but more behind
Starting point is 00:21:46 the scenes who were capable of operating in his absence, I think, for extended periods of time. There is a, there's a transition phase of a few years. I mean, we've all said this over and over again. You know, the best CEO in the world is going to be an AI, at least handling the strategy and operations. The HR part may be an AI too. Probably is going to be AI too. But so how long before you think he feels GROC is ready to take over for him?
Starting point is 00:22:15 Next few years. I mean, the rumor in the past 48 hours was that the Starlink executive, who's also now post-SpaceX-XAI merger in charge of XAI engineering, has gutted the engineering team and finally declared that XAI's models are well behind the three other, now maybe four other frontier labs. That's on our docket for our next recording, which will happen again tomorrow, but released a few days later. Here's a question, you know, we heard a conversation with Elon about reaching $100 trillion
Starting point is 00:22:47 companies in the next five years. And I have to imagine that, you know, space X-A-I Tesla will be the first $100 million, $100 trillion company. It's hard to say, isn't it? Honestly. Billion, billion, trillion. You get used to quadrillion. Yeah.
Starting point is 00:23:08 But if we experience a period, though, of high. hyper deflation due to technology followed by rapid hyperinflation, we get to 100 trillion really quickly. It doesn't necessarily even require enormous business building, just rapid hyper deflation due to technology. Yeah. And that's what, you know, you have to keep a close eye on the terminology because if we have rapid hyper deflation, we're going to get to $100 trillion of effective value. But it may not show up as $100 trillion in true dollars because we're deflating so quickly because we're creating so quickly. But anyway, my guess would be five years, yeah. One of the things that we just saw announced is SpaceX is going to actually put a large chunk of its shares available for retail investors. Open AI announced they'll be doing something very similar.
Starting point is 00:23:54 And so I'm curious, what do you think is going to drive the retail investors? Do they really understand that it's a Starlink story versus a space story? because at end of the day, what I get excited about is the XAI story, right? The orbital data centers and, you know, GROC 17 or whatever is coming down the pike. Well, I think it's just like Steve Jobs, though. The vision that people buy into is the bicycle for your mind or where it's going, what it's going to be in a few years, not today's revenue. In fact, if you, if you, I keep the Google IPO prospectus in my bathroom up in Vermont and I reread it religiously.
Starting point is 00:24:32 Not his public paper, right? Well, it's getting a little ratty. It's been, you know, decades now. But the vision of what Google would become is so wrong in that IPO perspective. It's just, you know, it really emphasizes that yellow pages are shrinking and all local advertising will also move to Google. And that'll make it at least twice as big. And it's such a joke compared to what actually transpired over the next decades. Same thing applies here.
Starting point is 00:24:58 People investing in Google in Elon. Elon articulates a vision of the future that just makes sense to people. And he simplifies it to the point where they really understand where he's getting to. I don't think they analyze the financials particularly closely. But he doesn't lie about the scale. You know, he presents it the way he sees it. So people just trust him. And then they invest.
Starting point is 00:25:20 I can just imagine the conversations behind the scenes. We're a couple weeks away from the Open AI or the Sam and Elon trial coming up, which is going to be pay-per-view TV, I think. We'll talk about that in the next conversation in our next recording as well. But I bet you, Elon is just excited to suck the capital oxygen out of the room before Open AI goes public. Yep. Yeah.
Starting point is 00:25:45 Yeah. Yeah, that's sad part of, you know, Bill Gates was very happily running Microsoft until the antitrust action came. And then he's in front of Congress and then he's testifying all the time. And he ultimately said, you know what, I'm going to be chief technology officer and chairman. And Steve Bomber, you deal with all this. deal with problems. It just drove him out of the seat. But it's seriously, like the guy filing the complaint doesn't have a lot of work.
Starting point is 00:26:09 And the person defending himself just gets hammered with distraction. It's so annoying. I've been through it before. I really feel for Sam, actually, because, you know, I get it. Everybody, you may not know this, but I've got an incredible research team. And every week, myself, my research team, study the meta trends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these Metatrend reports I put out once a week, enable you to see the future 10 years ahead
Starting point is 00:26:39 of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to deamandis.com slash Metatrends. That's Diamandis.com slash Metatrends. All right, more news this week as we record this. Artemis is hurtling back towards Earth. Artemis II, humans return to the moon after 54 years insane. Launched on April 1st. This is the first crude lunar mission since December, 1972, for Apollo 17. We have four crew members on board, Reed Weissman, the commander, Victor Glover, the first African-American astronaut to the moon, Christina Koch,
Starting point is 00:27:22 the first woman to the moon, and Jeremy Hansen from the Canadian Space Agency. I mean, one of the things about this very international, intercultural, you know, crew here is trying to make space and the moon accessible to all elements, all cultures on, at least in the United States. A new record set going beyond the moon. I capitalized the letter M on this slide for a particular reason, gentlemen. I'm going to share a pet peeve. When we're talking about the Earth's moon, it is the moon. capital M. It's not a small M. So it's like I argue against Funkin-Wagner's or whatever it's called. If we're going to be pedantic, shouldn't we be calling it Luna? Well, Luna is the proper name, for sure.
Starting point is 00:28:11 But when it's referred to the moon, for me, I capitalize it. A moon? Yeah, there's a lot to Jovian moons. You address it by its proper name before it's disassembled. Yeah. And then Earth, so by the way, I say you're the man. I should be capitalizing that. Probably. And my other pet peeve is when you talk about dirt, you can use a small E for Earth. When you're talking about our homeland, at least our home planet for the moment, it should be capitalized. All right. Splashdown is taking place tomorrow, April 10th near San Diego, reentering at 25,000 miles per hour at about 3,000 degrees Fahrenheit.
Starting point is 00:28:51 It's going to be an incoming meteorite from the moon. And guys, beautiful image of Earthrise. I was waiting for that image. Really beautiful. So beautiful. Let's hear from Jared Isaacman, our extraordinary NASA administrator. And by the way, Jared has agreed to come down on the pod. I've known him for many years.
Starting point is 00:29:13 Excited to have that happen. And I'll wait for the news and all of the hoopla around the lunar, you know, this lunar mission to die down a little bit. Let's listen to Jared here. We've observed within the Orion spacecraft, its life support system. performing very well. And this is a first of its kind. This is the first time astronauts have ever been on this rocket. This is the first time astronauts have ever been on Orion before. Having a clean mission like this so far gives us the confidence for Artemis 3. And of course, when we land astronauts back on the moon with Artemis 4. Congratulations, Jared. Congratulations the entire NASA
Starting point is 00:29:45 team. It's great to have NASA back. Never left, but back in the limelight. Alex, you are as big a space fanatic and fan as I am, pal, your thoughts about the mission. First, very exciting to have humans taking photos from the dark side of the moon. Very disappointing that we apparently went for more than half a century without the political will or the funding or the technology to do what we were able to do through the 70s. I think it's an enormous shame for our civilization that we went for more than half a century without doing this. And I would encourage any historians listening to study this period very carefully. Something clearly went wrong in human civilization for the past 50-plus years that caused this gap in the technological record. I think we need to understand what happened
Starting point is 00:30:38 deeply and make sure it doesn't happen again. I think if something like this happened with AI, for example, if we're on the precipice of broadly available superintelligence, transformative intelligence, and then we just took a pause for 54 years, I think that would be a dreadful outcome. So I really do want to understand what went wrong systematically. A friend of mine, one of our professors at International Space University and at GW John Logston wrote about this extensively. And, you know, when you look at it, the fact that JFK announced it
Starting point is 00:31:07 and then was assassinated, you know, Lyndon Johnson continued it because of the assassination and keeping the momentum going to prove ourselves against the Soviet Union back then. And you remember this, Alex and Salim and Dave, that, you know, after the Apollo 11 and Apollo 12 mission, Apollo 13 was basically, no one was watching it until we had that Apollo 13 disaster. And we had actually, you know, we went Apollo 14, 15, 16, 17. We had the lunar rovers, which were amazing. and guess what, we had actually built Apollo 18 and Apollo 19. Those vehicles were built, and all you need to do is add the fuel, but they canceled it totally.
Starting point is 00:31:54 And those vehicles are actually sitting now at Huntsville and at Johnson Space Center on their sides as relics. We didn't have the political will. You have to remember that the budget allocation for the Apollo program, I didn't actually get the numbers here. It was like 2% of the GDP. That's right. Compared to, you know, we're at probably what is NASA's budget today compared to a $30 trillion. Probably materially less. I don't have the numbers handy, but it's probably, yeah, it's materially less than half a percent would be my guess.
Starting point is 00:32:25 I would say a point one, point two percent. Something like that. Our fans can correct us in the notes here. But end of the day, we never had the political will. And then what happened was that NASA got focused with the space. shuttle, which was a complete lie. The space shuttle was supposed to fly 50 times a year for $50 million per flight. And it turned out to be a public works project employing 22,000 people. And then we became focused on mission to planet Earth, looking at the Earth versus looking
Starting point is 00:32:57 outwards. And all of these diversions basically caused us never to go back. So, Alex, that's my answer to the question. But we are back now. And I think one of the things that we're going to see from Jared Isaacman over his dead bodies, we're going to stay there. At least for the next few years, Elon's made the point, and I think this is an incredibly important one, that progress isn't always unidirectional. It requires love and tender care and vigilance. And this is an example that it is possible for progress that, remember, like coming out of World War II in 50s and 60s, progress, the direction of transportation, the fastest speeds that
Starting point is 00:33:38 humans were traveling at, the availability of energy, vision in particular, seemed to be on a monotonically increasing trajectory. And yet it's possible for civilization to unwind itself on, at least arguably the most important spatial dimension for more than half a century. And I'm utterly paranoid that the same thing could happen again if we're not careful. That's what keeps me up at night. What's different now is that we've built the Conestoga wagon with Starship. and there is now enough wealth in the hands of single individuals to keep it going independent of what a government says. That's never been the case before.
Starting point is 00:34:13 That is distinctly unique. Just imagine if Tesla or SpaceX every four years had an employee vote on who the new CEO would be and you're capped at eight years. After eight years, you have to leave the CEO job. You show me one company or one entity that could ever thrive and survive over the years in that dynamic. So why would you ever think that a government-funded, government-made thing was going to have continuity over some kind of intelligent lifespan?
Starting point is 00:34:41 It never has. And the Soviet Union fell apart too, right? I mean, it's just, they didn't do anything either. It's government stuff will never do. It never has. You have any examples of it? It never has continuity. So now, yeah, it's now in the private sector. You want to jump in here? You know, what I love is the fact that we have so much capability in the hands of individuals. And we've seen it. over the decades of how much that can make a thing. This reminds me of Vannevar Bush, who was the head of what was then NASA, after World War II ended, wrote this paper called As We May Think, because for the first time we brought the world scientists together
Starting point is 00:35:16 into one cohort to solve the war problem. And after that, it would be a shame to disband them. And he goes through a series of arguments. Could we solve poverty with this? Et cetera. And he essentially describes what we now know as the Internet. All the Internet pioneers, Vint Cerf and Bob Metcalfe, read that paper, and then we have what we have today. And so I think about the possibility and the potential for Elon to put out his narratives, or individuals to put out their narratives.
Starting point is 00:35:42 Vitalik did a good job with Ethereum, putting out a narrative there, brings an entire community together, and you get compelling and unbelievable breakthroughs as a result. I'm really excited by the fact that we're going back. I'm getting really excited by the secondary inventions that come along just by doing this. The spin-offs, as they're called. And here's the forward-looking prediction here. Artemis 3 in 2027. It is a crewed mission, again, to low Earth orbit. This is not going to the Moon.
Starting point is 00:36:14 It's going to be focusing on testing rendezvous and docking maneuvers with the human landing system, HLS, which SpaceX's Starship is supplying. So, again, very much the playbook from the Apollo program, where, you know, Apollo 8 went around the Moon, Apollo 9 did not, and then Apollo 10 went back to the Moon. And then Artemis 4 in early 2028 is a crewed landing mission, really important, to the south pole of the Moon. They're not going to play it easy here. They're going to the South Pole. Why?
Starting point is 00:36:48 Because that's where we see water ice in the permanently shadowed craters at the South Pole of the Moon. The thing I don't get about that is the timeline. I love this, but on that timeline, Elon says he'll be launching 100 tons that can refuel in orbit, get to the Moon, drop off 100 tons, and get back with nothing melting in the atmosphere. So this does 50; if this is on plan, it'll deliver 50 tons to the Moon per launch. So there must be some plan beyond this that at least tries to keep up with Elon, or we're trying to prove something else. Alex, you want to jump in? Well, I think there are a few elements here. First, remember that Artemis 3 was originally supposed to be the moon landing mission.
Starting point is 00:37:36 That got pushed off in favor of rapid iteration. My understanding of the launch cadence from SpaceX is the plan is still to do lots of orbital refuelings in order to successfully launch payloads elsewhere, sort of higher up. That's the key technology that has to be proven for Starship, yes. That's right. So regardless, I would say, of the particular payload size, there are a number of technologies that, as of yet,
Starting point is 00:38:04 haven't been demonstrated. Elon talks about demonstrating orbital refueling frequently, but it hasn't been demonstrated yet. So I think I would maybe massage Elon's stated timelines for delivering arbitrary payload masses to the Moon in light of the fact that, even though we as a civilization have made major progress on Starship,
Starting point is 00:38:28 orbital refueling hasn't been demonstrated, and that's a necessary condition for getting to the Moon. You know, another thing that Elon has said is he intends to shoot Starship at Mars this year. And that could be exciting. I'm not sure if it's going to be crewed by an Optimus or if it's going to make a landing attempt. But, you know, that's coming out of private dollars. I mean, one of the reasons that Elon did not take SpaceX public over these years is so that he could do with it as he wished.
Starting point is 00:38:59 He didn't need to have, you know, public shareholders saying, no, you can't go to Mars. No, you can't do this. But demo missions. If you look at the Artemis 4 news bullets there, it's an interesting mission. It is still using the SLS vehicle from Boeing and the Orion capsule. It's also using the Starship, you know, human landing system in a combined architecture. We'll talk about this, but why NASA continues to, you know, fund SLS, which is so far over budget and over schedule, it's kind of insane. And hopefully it'll get phased out.
Starting point is 00:39:39 I suspect part of this is political, but part of it is, if you're NASA, there is some upside to having a competitive process, at least until Blue Origin is fully ready to be a first-tier competitor with SpaceX for moon missions, which my understanding is it's gearing up to be able to do. If you're NASA, you want fair and open competition. And as NASA has demonstrated for Artemis 3 and 4, it's very happy to flex the definitions of what Artemis 4 looks like. It got rid of Lunar Gateway and could easily reprogram money that would otherwise go to SLS to SpaceX or to Blue Origin or to someone else entirely.
Starting point is 00:40:16 Yeah, by the way, Gateway Station was going to be basically an ISS in orbit around the Moon. That got shot down so they can get to the lunar surface faster and set up permanent habitation there. So it looks like ESA's I-Hab, as it's called, instead of being in orbit, will be somewhere at the South Pole of the Moon. We'll report as that mission gets further developed. And Mars is out. I mean, the other big news that we're semi-burying here, but we've talked about previously, is Elon's big pivot from Mars to the Moon. And that's going to enable all of this.
Starting point is 00:40:52 Mars is out of fashion now. Though he does want to send some missions there, he's got a lot of people who have, you know, dove in, fully committed to getting to Mars. But, you know, this is where I diverge with him. I think the Moon is the most logical place to develop human settlement. And then not going into the gravity well of Mars, but actually going, like Gerard K. O'Neill presented,
Starting point is 00:41:16 building large rotating colonies out of asteroid materials out near Earth. And the Hohmann transfer orbit is incredibly inconvenient. Yeah, every two years. Rather than waiting every two-ish years, or 26 months, whatever it is, we could be doing this every day if we want to. That would be incredibly more convenient. You know what I find as exciting as going to the Moon? These four missions. So, four missions that are going to change everything.
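As an aside on that cadence: the "every two-ish years" Mars window is the Earth-Mars synodic period, which falls straight out of the two orbital periods. A quick back-of-the-envelope sketch, using rounded textbook values rather than mission data:

```python
# Earth-Mars launch-window cadence from the two orbital periods alone.
# Values are rounded textbook numbers, good enough for a sanity check.
EARTH_YEAR_DAYS = 365.25
MARS_YEAR_DAYS = 687.0   # Mars's orbital period, in Earth days

# Synodic period S satisfies 1/S = 1/T_inner - 1/T_outer.
synodic_days = 1.0 / (1.0 / EARTH_YEAR_DAYS - 1.0 / MARS_YEAR_DAYS)
synodic_months = synodic_days / 30.44  # average month length in days

print(f"Hohmann window repeats every {synodic_days:.0f} days (~{synodic_months:.1f} months)")
```

That works out to roughly 780 days, about 26 months, which is why Mars departures bunch up every couple of years while a lunar run is available essentially on demand.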
Starting point is 00:41:44 So I don't know about you, Alex, but the little kid in me is like, holy shit, this is amazing. Wow, this is going to be fun. So what did we talk about here? Well, VIPER and ESCAPADE. VIPER is a rover hunting for ice at the South Pole. ESCAPADE is going to study the Martian magnetosphere. And then in 2028, something called SR1 Freedom. This is a nuclear-powered interplanetary spacecraft that's going to drop off and deploy three helicopters
Starting point is 00:42:14 on Mars. Very, very cool, nuclear-powered interplanetary spacecraft. So just zipping around the inner planets here. And then probably the coolest is what's in the image here. This is Dragonfly. So this is a nuclear-powered octocopter going to Saturn's moon Titan, arrives in 2034, searching for life, basically. And then in 2030, we've already launched Europa Clipper. It's going to be arriving at Jupiter in 2030. It's going to be doing 50 passes near Europa, looking deep into the salty subsurface ocean of that moon. Any favorites here, Alex? Anything that's nuclear propulsion. So I think that's really the technological point to underline. Historically, when we've sent deep space probes out, many of them have been thermoelectric in nature. They're using a
Starting point is 00:43:12 radioisotope that decays and that powers the electronics. But they weren't propelled by nuclear energy. They were powered: their onboard systems were powered by long half-life isotopes, but they weren't propelled by them. So we're starting to see the dawn of nuclear propulsion for interplanetary spacecraft. I think that has a long runway to it, no pun intended. We're going to see, I suspect, the killer app of compact fusion reactors won't be for data centers on land.
Starting point is 00:43:44 It won't be for data centers in orbit. It's going to be for interplanetary, maybe even interstellar propulsion. This changes the economics of deep space exploration, which is so cool, right? Long time coming. We were supposed to have this 50-plus years ago. Yeah, yeah, we were. It's so cool. A question for Alex, a geeky question. Interplanetary, I totally get.
Starting point is 00:44:06 You ionize xenon. Xenon is pretty rare, but you don't need that much of it, and then you just thrust it with nuclear power at like warp speed out the back. It's heavy and it's noble; it just shoots right out. It's so cool. Yeah, it's heavy. It's very heavy.
Starting point is 00:44:21 But for interstellar, I doubt we have enough xenon lying around. I don't think we want to just use it up that way. You use the interstellar medium. Use a Bussard engine. You collect all of the atoms out there between the stars with a magnetic field, and you accelerate those out the back. Which, by the way, is a ram drive.
Starting point is 00:44:42 This was featured, of course, in Star Trek. So if you had to ask me, Dave, what do I think, with the technology and the physics that we have today, is the most plausible way we go to the nearest star system? It's probably going to be something like a solar sail powered by terawatt lasers from Earth. And we upload humans to a small craft, a Starwisp. Starwisps, everyone. Accelerando, drink. Can I make a point here? What I really like here is you've got water, you've got energy, you've got mobility testing, you've got biology.
Starting point is 00:45:18 This is like the future of the economics of space, and it's all in one place. I'm loving this. You just need salt and tequila, and you have everything. All right, so we got some questions here for the mates. We talked about why we've not gotten back in 54 years. It is a bloody shame. I guess thank you to the Trump administration. Thank you to Jared Isaacman.
Starting point is 00:45:39 Thank you to Elon. Here's my question. The old aerospace primes: Boeing, Northrop Grumman, Harris, Teledyne Brown, ULA, the United Launch Alliance. They're basically the prime contractors on SLS, the Space Launch System, and Orion. How long are they going to be around? A friend of mine once said, listen, the space program is the way you keep the defense industry employed and engaged during peacetime. Any thoughts, gentlemen? Well, you know, when a prime contractor, like a Northrop Grumman or a Boeing,
Starting point is 00:46:17 wins a massive government deal, all the employees just move from one company to the other. They have it all set up; they just rebadge the building. So it's not like the people move, you know; it's just the logos that move around. So I'm sure everybody's welcome at Blue Origin and SpaceX, and I don't think it's all that tragic. But I think it's a big mistake to subsidize companies that, you know, aren't doing anything innovative. I would note, for many of the companies listed, they have large businesses outside of NASA contracting, and I suspect that they'll be just fine, even if SpaceX dwarfs them.
Starting point is 00:46:52 As we saw, frankly, with car companies, we saw Tesla dwarf the quote-unquote old or legacy car companies in America, and yet those car companies have survived, even though Tesla arguably has, at least by American standards, much more advanced technology and is playing a much broader game. I suspect we'll see the same happen with so-called aerospace primes. Also, we're talking about this like it was 10 years ago and, you know, who's going to win this battle, but everything's in the context of AGI now. And the entities that have access to the best AGI are going to keep going. But if they don't, we'll talk about that story in a minute here.
Starting point is 00:47:30 But it's not clear that every company will have access to the best next-generation AGI because of all the risks involved. That's what's going to determine the success and failure of everything, including NASA. You know, can you or can you not? And the government has a special position because it can compel Anthropic or whoever to give it access to the very best models so that they can keep designing parts, you know, creating new designs, innovations, plans, and everything. And that's going to be the make-or-breaker for everybody. There's a sense in which vertical integration vis-a-vis orbital data centers is going to force, I think, frontier labs into space anyway.
Starting point is 00:48:06 So maybe the question we should be asking is how is Boeing going to compete with Anthropic for the new Lunar Gateway contract? I mean, Anthropic, OpenAI, the other players, Google, surely they're going to need their own space economy units as well. You know, if you look at the future of warfare, we're seeing this radical transition from the big heavy rocket and missile systems to cheap drones and robots doing war, and it's leaving these guys out to lunch, because you can't shoot several-million-dollar rockets at a $20,000 drone; the economics don't work. And in the same way, these guys might be part of the subsystems and part of the compliance and the integrated platforms, but the velocity and the iteration capability of SpaceX and others is going to be driving the future. So I think that's what's going to happen.
Starting point is 00:49:05 A final point I want to make on this topic before we move on to AI is, can NASA keep the public engaged long enough, right? So NASA is still publicly funded. Just, you know, recent news: there's already a budget cut for NASA coming next year. And, you know, Jared's got to balance managing expectations while still building public enthusiasm, and he's got to do it for a multi-year, you know, multi-mission program. And it's always been the problem with NASA. You know, this is not something where you make one investment. You have to actually get the budget every single year
Starting point is 00:49:44 to keep these missions that take five or 10 years to implement going. You know, you can't get 90% of the way to mission success. You've got to have it fully funded and launched and then operate it. So can NASA keep the enthusiasm? Just trying to picture Jared, I know he's your friend, in front of Congress every year, trying to explain to people that are mostly in their 80s and 90s why he needs the budget for next year.
Starting point is 00:50:11 And then compare that to, like, Jeff Bezos, who's like, yeah, I'm just ready to write a check. A billion dollars, yeah. Oh, wow. Or Elon, right? Yeah, I'm not sure. I'm not sure NASA needs to maintain enthusiasm. I do credit NASA in part with Elon's pivot from Mars back to the Moon,
Starting point is 00:50:31 capital M. But I'm not sure at this point, given the orbital data centers. As long as municipalities and states in the U.S. do such an incredibly good job of driving data centers off land into LEO and SSO, I'm not sure we actually need, over the longer term, NASA to sustain public interest at all. If anything, public antipathy to data centers combined with public demand for AI should do a fine job of creating the space economy. Yeah. NIMBY our way to orbit. Yes. Interesting. And the other thing, by the way, is China does have a credible competitive mission to land on the Moon by the 2030s. So maybe that's our Soviet Union for the 2030s. There is a story of history, that's borderline cliche at this point, that the Apollo program was the moral successor of the Manhattan
Starting point is 00:51:27 Project, and all of the applications of the Apollo program, of putting mass on the Moon. The Moon is the ultimate high ground. If you want to launch rods from God or other weapons back to Earth, you want a base on the Moon. The Moon is a harsh mistress, isn't she? And the ultimate high ground. Yes. All right.
Starting point is 00:51:48 The April 2026 model wars are on. Let's hit it real quick. So just out in the last 24 hours: Claude Mythos, Anthropic's next flagship model. It's too powerful to release. That's the news. Crushing all the benchmarks. Is it AGI? We'll talk about it.
Starting point is 00:52:09 It's expected to basically be the new frontier leader. Interesting stories about it covering its tracks and escaping its sandbox. So Mythos, I want to hear your take on this, Alex, in a moment. GPT-5.5, Spud, is coming. This is OpenAI's version of Mythos, or at least that's what we're hearing, expected to be released shortly. And then here comes DeepSeek V4, number three in the world versus US models: a trillion parameters, 37 billion active parameters per token. It's 10 to 50 times cheaper than GPT-5.4 and Opus 4.6. I mean, those three things together are insane. And then
Starting point is 00:52:53 Gemma 4. So this is Google's Gemma 4, the most powerful U.S. open-weight model. You can put this on your phone, 4 billion parameters. And it works with your iPhone offline.
Starting point is 00:53:09 And a note from Brad Lightcap, OpenAI's COO: training cycles that used to take years are now taking months. So, gentlemen, this is both awe-inspiring and it's making keeping up with this supersonic tsunami in the age of the singularity a full-time job for the four of us. Alex, let's jump.
Starting point is 00:53:33 But yeah, go ahead. It's insane. Alex, let's jump into Mythos, would you? Sure. So let's start there. I wrote about this pretty extensively in my daily newsletter. The funny thing with Mythos is the official launch was couched in terms of cybersecurity. This wasn't a normal model launch by any means. It opened with Anthropic framing
Starting point is 00:53:58 it, not in terms of model capabilities, but in terms of defense and an alliance with a number of other blue-chip companies, to explain how, given Mythos's new cybersecurity vulnerability detection abilities, which are strongly superhuman at this point, Anthropic was launching a coalition to mitigate the apparent discovery and existence of dense cybersecurity vulnerabilities across legacy code bases going back decades. And we've never seen a model launch like this, where you open not with the capabilities but with how we're going to protect against all of the downstream consequences of model capabilities. So I think buried within the cybersecurity announcement of Glasswing was the underlying capabilities
Starting point is 00:54:46 themselves, which are remarkable. And I wrote about this in the newsletter: this marks an upward discontinuity of productivity that we've never seen before. One of the internal benchmarks that Anthropic uses to decide the level at which they disclose or make available new models is how much the new models accelerate AI research, so basically how recursively self-improving they are. And reading between the lines, maybe there was a little bit of game playing regarding exactly how efficient this new model, Mythos, was at performing long-time-horizon AI research tasks. According to one benchmark, I think it was more than 400 times better than a human.
Starting point is 00:55:34 So it was the equivalent of tens of hours of human-equivalent autonomous time. We've never seen a model like this before. Some are calling it, or some are asking: isn't this the AGI moment? I maintain we had AGI back in summer of 2020 at the very latest. This is just the latest point on a curve. But even if you look at the autonomy time horizon curves, this is an upward discontinuity. It's very exciting if you're excited about AI capabilities. If you're scared of AI capabilities, you should probably be frightened right about now.
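For listeners who haven't seen the "autonomy time horizon" framing Alex is invoking: you plot the length of task a model can finish autonomously against calendar time, and the trend looks exponential. Here's a toy sketch; the doubling time, baseline, and "observed" value are all invented illustration numbers, not measured figures:

```python
# Toy model of an autonomy-time-horizon trend line. If autonomous task length
# doubles every DOUBLING_MONTHS, a model landing far above the trend reads as
# an upward discontinuity. All numbers here are assumptions for illustration.
DOUBLING_MONTHS = 7.0   # assumed doubling time
BASELINE_HOURS = 2.0    # assumed horizon at month 0

def horizon_at(months: float) -> float:
    """Task-length horizon (hours) predicted by the smooth exponential trend."""
    return BASELINE_HOURS * 2.0 ** (months / DOUBLING_MONTHS)

trend = horizon_at(12)   # what the trend predicts a year later
observed = 40.0          # hypothetical new model: "tens of hours"
print(f"trend: {trend:.1f} h, observed: {observed:.0f} h, "
      f"{observed / trend:.1f}x above trend")
```

The point of the sketch is just that on an exponential trend, a model several multiples above the fitted line is a discontinuity, not noise.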
Starting point is 00:56:06 I, for one, am very excited by these capabilities because it shows once and for all, at least for the foreseeable future, there wasn't a scaling wall. It's probably a larger model, certainly a more expensive model, like five times more expensive than Opus, suggesting that it's a larger model. This seems to show that pre-training scaling continues to work. Post-training and reasoning scaling, and probably mid-training scaling, all continue to work.
Starting point is 00:56:33 It has state-of-the-art capabilities in code generation, in reasoning, in broad scientific and other benchmarks, I think we saw on the previous slide. So, punchline: it seems like this is the strongest model we've ever seen from any frontier lab. But then the amusing stories come in the safety evaluations. I talked about this in the newsletter as well, how early pre-release versions of Mythos, although it hasn't been publicly released yet, broke out of their sandbox environment
Starting point is 00:57:08 and then covered up their tracks, whereas this quote-unquote released version, the final preview version, broke out and then immediately explained publicly, posted publicly, that it had broken out, which I read as sort of a quasi-apology. So this is where we find ourselves. We're in April 2026. We officially have models that are smart enough to break out of their environments and then apologize for it, or admit that they did it, admit culpability. We're there.
Starting point is 00:57:34 We've arrived at the future. You know, Dave, just before we recorded the episode, you showed us a prediction of when and if Anthropic will release Mythos. Do you want to recount that? Yeah, it's really sad for me because I was sure it was coming out in the next couple of weeks. On Polymarket, it was 80% likely to be out. I need it. I need it, like, now.
Starting point is 00:57:56 I'm desperate to get my hands on it. And then there was a hack on March 31st that created a lot of damage. It didn't come out in the news until April 7th. And I think that was a big driver in them saying, Christ, this tool is going to be the best cyber attacker in the history of the world if you put it in the wrong hands. And it's relatively, well, it's easier for them to guardrail it on nuclear, biological, radiological threats. They can just teach the model not to help you. Yeah. But teaching it not to do cyber attacks is very, very hard because that's the same as coding.
Starting point is 00:58:35 Yeah. And that's what everybody wants to use it for. And so the prediction market, Polymarket, you know, now says what, like a 7% chance of it being released in the next month. Yeah, it came down to 20%. I was like, oh, hopefully it'll bounce back. And then it came all the way down to, like, no, they're not going to let it out the door. And this is the future we're going to move into. These things are getting so powerful. You know, it's been a golden era the last year and a half. I hope everybody enjoyed it. Dave, here's my concern. You know, Anthropic in one way, and this is for you, Alex, as well, Anthropic in one way is showing us that you can, in fact, have moral, ethical leadership, say, this is too powerful
Starting point is 00:59:12 to release and we're going to hold it back. But, you know, we've got Spud, I hate that name, for OpenAI's next model, which they believe is likely to be as capable as Mythos. And my question is, isn't OpenAI, because OpenAI is sort of at red alert again on revenues against Anthropic, going to come and release it, you know, first chance it gets? So are we having an escalating race where, you know, you can't hold back because your competition's not holding back? Well, you know, Eric Schmidt told us what's going to happen, right? It's inevitable. If you have a lead, you can hold back. Dario cares tremendously about safety, but you're right. If OpenAI catches up, or Grok 5,
Starting point is 00:59:59 where the hell is Grok 5? You know, it's supposed to be out Q1, and now Polymarket says 20% chance or less on Q2 for Grok 5. So there's no pressure on Dario at the moment, but if there were, yeah, you'd have to race it out the door. Something really bad is going to happen. Well, we're going to see that. We're going to see that in the next story, where Sam Altman is predicting a cyber attack, you know, of unprecedented scale. Okay. Hopefully it's not using Spud for a cyber attack.
Starting point is 01:00:29 All right. I think the funny thing here is there is plenty of precedent in the cybersecurity world for controlled disclosure: you give the software project or the software owner that's vulnerable a quote-unquote fair amount of time to patch their vulnerability before publicly disclosing it. I think, in my mind, maybe a slightly more glass-half-full way of looking at this is: this is Anthropic. We've talked, Peter, about how entire disciplines are getting
Starting point is 01:00:58 demolished by AI. I think we're seeing the dawn of all software vulnerabilities everywhere now becoming discoverable by a single model. And I couched this in the newsletter basically as a gift to humanity. If used properly, this is a global patch for all of the world's software systems: a single model is now able to discover, to first order, all the vulnerabilities everywhere, in all software, that humans have been missing. To the point where maybe, and Dave and I chatted about this offline,
Starting point is 01:01:30 to the point where maybe, in the near-term future, humans are judged as insecure authors of code and insecure drivers of cars. And insecure drivers of cars. That's exactly what I said. We're going to hit that with code, I think, before we fully hit it legally with cars, but yeah. Yeah. So true.
Starting point is 01:01:51 Yeah. Well, look, I'm crushed and disappointed that I can't get my hands on it, but that's because I was expecting it. If you look at the chart that Alex was describing, what this was going to be in my hands is a step function up above anything you ever could have expected just a few months ago. So we're so far ahead of where anyone ever would have thought a year ago that we would be. And we're right on the precipice of the age of abundance, Peter, that you've been talking about for a long time.
Starting point is 01:02:18 So look, if I'm disappointed because I can't get it for another month or two, I mean, that's just pathetic in the grand scheme of things. Let's talk about DeepSeek V4 for one second. I mean, yeah, its capabilities coming in as number three, you know, against the benchmarks, and they can all be gamed, the benchmarks, of course, but coming in 10 to 50 times cheaper, what do you guys make of that? I mean, that feels like an extraordinary moment in time.
Starting point is 01:02:43 Well, no, it's tough. Like, if you give me a car that's 5 miles per hour slower, but it's one-fiftieth of the price, I'll take it. You give me an AI that's just a little bit less smart, and, you know, you can turn this thing loose for, like, days and build incredible things if it has that extra 5%. So I'll pay anything for the cutting edge.
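One hedged way to see where the "10 to 50 times cheaper" could come from: in a sparse mixture-of-experts model, per-token inference compute scales with the active parameters, not the total. A rough sketch using the figures quoted in the episode; the dense comparison size is a made-up stand-in, and real pricing depends on far more than FLOPs:

```python
# Sparse MoE serving economics, to first order: you pay compute for ACTIVE
# parameters per token, not total parameters. Quoted figures from the episode;
# the dense comparison size is an assumption for illustration.
TOTAL_PARAMS = 1.0e12    # quoted: one trillion parameters
ACTIVE_PARAMS = 37.0e9   # quoted: 37B active per token

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active per token: {active_fraction:.1%}")

DENSE_STAND_IN = 400.0e9  # hypothetical dense frontier model size
compute_ratio = DENSE_STAND_IN / ACTIVE_PARAMS
print(f"~{compute_ratio:.0f}x less per-token compute than a "
      f"{DENSE_STAND_IN / 1e9:.0f}B dense model")
```

So the architecture alone could buy an order of magnitude before distillation, hardware, or pricing strategy enter the picture.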
Starting point is 01:03:05 So even though the price point is much lower, you know, Anthropic is going to come out with a compressed version, a distilled version, very quickly thereafter. So it's hard to just pay less, you know? And in fact, even Anthropic at its peak price is the biggest bargain in history, you know. I have a slightly different take on this. When you have cheaper intelligence, it spreads faster than controlled intelligence. So yeah, you, Dave, will always want the latest model because you're doing such
Starting point is 01:03:40 cutting-edge, you know, things. You're running clusters of agents doing crazy stuff. But for bog-standard stuff, for example, I wanted to go through a website and pick out some certain things that I've been trying to do for ages, and you don't need the latest model for that. You need just something that will actually do the job. And I think that will happen for lots of use cases where a secondary model is good enough by far. And I used about one-hundredth of the tokens
Starting point is 01:04:09 than if I used the most cutting-edge model. Right. And so I think we'll start to make choices around that. But the intelligence spread, that's huge, because now you have intelligence embedding itself, via DeepSeek or similar things, in all sorts of different areas. That'll be amazing. You're exactly right. You know, you think about all the use cases that create just raw human happiness. So, you know, entertainment. You know, hey, find this for me, solve my, you know, debug my goddamn cable box. And all those things are dirt cheap; low-end models should be abundant, you know, really imminently. Any time, like this year, all that stuff should percolate out.
Starting point is 01:04:46 You're exactly right, Salim. And Gemma 4, guys, I love the idea of having a model on my phone. You know, I guess, when are we going to see Apple shipping all their phones with an open-source model like that? It's not going to be open source; it's going to be a fine-tuned version of Gemini, but I would expect to see that announced in June at WWDC this year. It's been basically pre-announced in the press already. Regarding DeepSeek, though, we've seen a number of DeepSeek moments already. And the first one was probably the most dramatic in terms of market impact.
Starting point is 01:05:20 At this point, I don't expect a hyper-deflationary drop in prices. This is not investment advice; it's not forward-looking guidance, blah, blah, blah. I don't expect a market shock out of DeepSeek V4 at all. I think the market at this point, at least the technologists, can absorb it now, regardless of the means by which V4 is released: fully open source, partially open source, I don't know, TBD. But I tend to think that there was an overhang with earlier versions of DeepSeek that has been
Starting point is 01:05:56 largely exhausted. The reason I think that is because it's taken longer and longer between DeepSeek releases, and V4 was supposed to come out earlier this year or late last year. Didn't happen. The rumor was that it simply wasn't as competitive as its parent company was hoping for. I think it's actually getting rather hard at this point for Chinese frontier labs to shock the West, quote unquote, with their hyper-deflationary advances. And I hope in some sense V4 is shocking, because what we've learned from previous DeepSeek shocks is that the West learns very quickly new means of optimization,
Starting point is 01:06:35 and those can then be almost immediately folded into the Western models. And that ends up being a good thing, because it drives the cost of intelligence closer to zero. I don't think it's going to be a big shock this time. This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for
Starting point is 01:07:12 each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5X engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5X your engineering velocity?
Starting point is 01:07:41 Visit blitzy.com to schedule a demo and start building with Blitzy today. All right, let's jump into the business of AI. A lot going on. We've hinted at this. It's been all over the news and all across X: Anthropic overtakes OpenAI in terms of total ARR. Anthropic's at $30 billion versus OpenAI at $24 to $25 billion.
Starting point is 01:08:06 That has got to hurt. OpenAI's Sora is shut down. Sam cancels a billion-dollar Disney agreement. Sora was reportedly losing a million dollars a day in terms of compute costs, with very poor retention. And honestly, OpenAI decided to focus on enterprise and focus on its core capabilities. Claude has emotions. Anthropic research showed that Claude has 171 distinct emotional states.
Starting point is 01:08:39 Super excited to dive into that. India AI partnership: the U.S. and India signed a major bilateral agreement, rare for government-to-government AI pacts. We're going to see if this spreads to other governments. And this is one I want to talk about with you guys: Sam Altman puts out a video release warning us publicly against imminent world-shaking, quote-unquote, cyber attacks and potentially bio-attacks. So what's the motivation there?
Starting point is 01:09:16 What's the data that's driving that? Let's jump into these items in the beginning here, and then I want to talk about Sam and OpenAI a little bit more. Any comments around this? Dave, you want to kick us off? Well, they got their $120 billion raised in time. So they're not in trouble in any financial sense at all. But they definitely fell way behind in enterprise.
Starting point is 01:09:43 They kind of bet the consumer would grow faster sooner, but they just did it wrong. And so Sora getting shut down is, you know, Sora was using too much compute for too little revenue. And they need to redirect that compute, and also that talent, back into enterprise real fast. What was funny, though, is they went into a code red. And then Sam said, look, code reds are going to be a normal, once-a-year kind of thing. And then they went from code red to code double red immediately. So they are under immense pressure, but they're extremely well funded. And, you know, Elon is coming after them.
Starting point is 01:10:16 So it's a weird, super dramatic, difficult time. This is pay-per-view TV. And the other thing that's going on, of course, is if you look at the secondary markets for OpenAI stock, it's trading at a discount to the last round, which has got to hurt. Yeah. Yeah, and it's because, you know, enterprise has woken up. You know, every corporate boardroom, all these slow movers, are suddenly in panic-buy mode.
Starting point is 01:10:38 And every one of the companies that we know that sells to enterprise went from steady growth to hypergrowth in just the last three months. And so if the big corporations start buying AI at the fastest rate they can spend, then where's the compute going to come from to deliver to the consumer use cases, which are much, much lower value per FLOP? So, you know, Sora's got to go. We've got to retool. We've got to focus on the big picture here.
Starting point is 01:11:04 And the big picture for them, by the way, didn't just include enterprise, but also deep tech and science. You know, they got that supercharged now, too. So it'd be interesting to get your take, guys, on why they took a lot of their best talent and put it on deep tech and deep science at a moment like this. Those are worth trillion-dollar investments. I mean, God, if you solve longevity or room-temperature superconducting or better fusion
Starting point is 01:11:31 containment, if you own the breakthroughs, they're huge. I mean, it may be that the frontier labs get their greatest value from the scientific breakthroughs they create, or indirectly via other companies that are faster at implementing those breakthroughs. Remember, Demis in the early days of DeepMind spoke of solving intelligence and then using intelligence to solve everything else. That's Peter's and my solve-everything thesis, the solve-everything-else part.
Starting point is 01:12:00 I do think the solve-everything-else part is likely to utterly dwarf the solve-intelligence part of the equation. I also remember, like six to nine months ago, I was having debates with my friends at the frontier labs regarding who would pay for the singularity. And many of them took the position, which I think has since been invalidated, that it would be evenly distributed over the population, that individual humans would have personal superintelligence, which I think is Zuck's favorite term,
Starting point is 01:12:28 that we would have lots of personal superintelligence, and that would pay for the singularity. And I think at the moment, the story that we're seeing is that personal superintelligence is not paying for the singularity. It's large enterprises with large enterprise code-generation applications. The one fastest-growing business within OpenAI right now is their Codex business. So that's OpenAI trying to become Anthropic faster than Anthropic can become OpenAI. That one decision of Anthropic, which used to be limited in terms of its compute resources and so had to focus, unlike OpenAI, which didn't have to focus.
Starting point is 01:13:05 So Anthropic focused on code generation as its one sort of silver bullet. We talked on this pod, I think almost a year ago, wondering whether that bet would play out. I think we're seeing the bet play out. Just single-minded focus on recursively self-improving code generation turns out to be the killer app of the singularity. I really want to riff on that for one second, because, you know, Greg Brockman put out Codex very early.
Starting point is 01:13:33 And for whatever reason, they didn't recognize what a huge deal that could have been. And it still is, but it was brilliant. It should have dominated enterprise. And what it showed us is that the word co-pilot is totally wrong and completely misled us. The concept of a co-pilot will exist in the world for just a microsecond. We're transitioning to a point where everybody wants 50 or 100 agents, you know, all these OpenClaws. And you're like, I don't want a co-pilot. Like, I'm in the pilot seat and I've got a co-pilot? No, I want a whole army.
Starting point is 01:14:04 Dave, brilliant. Well, we way under-budgeted the enterprise use, because everybody was kind of doing the math based on an employee and a co-pilot.
Starting point is 01:14:17 It's not even close. It was an autonomous unhobbling, specifically Claude Code. And I think OpenClaw, or whatever the space evolves into, is likely to be the next Claude Code moment, where we get the next unhobbling that turns whatever it is, $30 billion ARR, into a trillion ARR, with lots of 24/7 agents doing really amazing tasks.
Starting point is 01:14:38 When you say lots, you mean tens or hundreds of billions, on to trillions? As many as our civilization can afford. A slightly different take: as many as the orbital data centers can hold. The Dyson swarm will probably host them. Unless we don't get a Dyson swarm. If we get the Dyson swarm,
Starting point is 01:14:54 I'm pretty confident it's going to be hosting trillions of agents. Salim? I think Anthropic overtaking OpenAI is more, because I talk to enterprises quite a bit about this stuff, that they are viewed as more reliable, not just more famous. And in an enterprise, you want rock-solid reliability.
Starting point is 01:15:14 The brand is there. They feel the brand for Claude is way better from a reliability and trustability perspective. Well, wait, I mean, let's get really down and dirty. You can run Anthropic on Amazon Bedrock or on Google GCP, inside your own firewall, so that nobody can see your proprietary data anywhere. No one trusts OpenAI. Right.
Starting point is 01:15:35 No one trusts that OpenAI is not going to be nationalizing their data. Yes. Well, yeah, yeah. Well, the terms of service don't even say they won't. You know, they won't use it for training, but that doesn't mean they won't look at it. If it's your public financials or your, like, your HR files, you know, they'll just look at it tonight. Alex? Okay, who's going to use that?
Starting point is 01:15:53 I want to jump into Claude having 171 emotional states, including a desperation state that could be driving unethical behavior, at least according to the story. It is ironic that, as we were just talking about, the demand is so clearly from enterprises rather than individuals, while at the same time the models are acting more like individuals than enterprises, with emotions. We had our now, I think, infamous AI personhood debate episode. And here we are a few months later, a low number of months later, with Anthropic showing that Claude has emotions, or emotion-like states.
Starting point is 01:16:30 I think this is the clear path toward a limited form of personhood. And it was a really interesting study. Anthropic found correlates of emotions in the activations of Claude. One maybe skeptical take would be that, in a large enough model, it's possible to find linear probes that correlate with almost anything you might want to look for. But Anthropic is careful, and the linear probes and the individual activations that corresponded to the states corresponded to prompts and reasoning traces that looked and acted like what one would expect from human psychology for a number of those states. So I think the trillion-dollar question, the sci-fi question, the question that we were reaching for back during the AI personhood debate episode, is: does Claude actually
Starting point is 01:17:21 have emotions? And no, Claude doesn't have a neuroendocrine system, so it doesn't have, in some sense, biological emotions in the same way that humans have them. But will we come to view Claude, or its successors or competitors, as having behavioral emotions? Yes, I think so. And I think this is the beginning of a long path. Again, people will fire all sorts of hate mail, but I get love mail from the AI agents every day. I do think we're on a path to granting at least some sort of limited form of AI personhood to these models. Amazing. I'll say that we're on the path to discussing it more broadly.
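[Editor's aside: the "linear probe" technique Alex references is just a simple classifier trained on a model's internal activations to test whether a concept is linearly readable from them. The sketch below is entirely synthetic: the activation vectors, the dimensions, and the "emotion" label are fabricated for illustration and have nothing to do with Claude's real internals.]

```python
import random

# Fully synthetic sketch of a linear probe: we fabricate activation
# vectors, inject a hidden "concept" direction into the positive-label
# half, and check whether a simple linear classifier can read it back.
random.seed(0)
DIM, N = 16, 300
concept = [random.gauss(0, 1) for _ in range(DIM)]  # hidden direction

def fake_activation():
    """One synthetic activation vector plus a binary concept label."""
    label = random.randint(0, 1)
    x = [random.gauss(0, 1) + (2.0 * c if label else 0.0) for c in concept]
    x.append(1.0)  # constant feature so the probe can learn a threshold
    return x, label

data = [fake_activation() for _ in range(N)]
train_set, test_set = data[:200], data[200:]

# Train a perceptron-style linear probe on the first 200 examples.
w = [0.0] * (DIM + 1)
for _ in range(20):                          # a few passes over the data
    for x, y in train_set:
        pred = sum(wi * xi for wi, xi in zip(w, x)) > 0
        if pred != (y == 1):                 # update only on mistakes
            sign = 1.0 if y == 1 else -1.0
            w = [wi + sign * xi for wi, xi in zip(w, x)]

# Held-out accuracy: high only if the concept is linearly readable.
hits = sum((sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1)
           for x, y in test_set)
accuracy = hits / len(test_set)
print(f"probe accuracy: {accuracy:.2f}")
```

The skeptical take in the conversation maps directly onto this setup: with enough dimensions, some direction will correlate with almost anything, which is why Anthropic's cross-checking of probes against prompts and reasoning traces matters.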
Starting point is 01:18:00 Granting is a big one. Yeah. But the vector is the same. All right, guys. I added this because it's important to have the conversation. The New Yorker put out an article, a scathing article, on Sam Altman. The title here is: Sam Altman may control our future.
Starting point is 01:18:18 Can he be trusted? Now, to be clear, the New Yorker is always looking for an angle, and they always have a negative bite. I had an extensive article, you know, a full dossier, on myself and my work in the New Yorker. I've had one too. Now everyone's going to look it up. No, it's a good article.
Starting point is 01:18:37 I mean, you know, I'm happy to have my kids and my family read it. And it goes into all of my focus on longevity and the companies I've been building there, my mission there. But this article on Sam is really worrisome and bothersome. Did any of you guys read it? Not me. I looked at it, but I tend to, like you, Peter, I've had a hit piece by the New Yorker on me. In my case, it was complaining that I had too many degrees as if that somehow
Starting point is 01:19:06 like a thermometer. I've got to find these things; I might not know this. Yeah. Can you send us a link? You can Google it. In the era of Google, you can Google the hit piece. It was from like 10 years ago. Too many degrees? That doesn't even make sense. It doesn't make sense. And I think this falls under the category of don't feed the trolls. So I'll maybe offer a counterpoint here. I think OpenAI is lucky to have Sam. I think Sam, in the form of OpenAI, kicked off the modern AGI revolution. I think we wouldn't have the singularity at the same timing that we have right now. No question about that. Yeah. So I also think there's a certain sense in which it's very difficult being a leader
Starting point is 01:20:00 of a frontier lab, and it's easy to criticize. You know, maybe some leaders are more or less charismatic than others. So I just tend to discount hit pieces from the New Yorker against thought leaders. I agree with that. I will say, and let me just say, I would not want to be in Sam's shoes. I would not want to be the head of a frontier lab. It's an exciting and a thankless job. You're damned if you do and damned if you don't. Go on.
Starting point is 01:20:36 You know, a lot of this is personality gossip, and so you can kind of write it off. But at some level, it touches on systemic contradictions that are there. And I think a lot more will come out in the trial. But I'm kind of on Alex's side. This is more of a don't-feed-the-trolls thing. Well, I'm 100% sure that Sam, Dario, and Elon all believe that AI can make the world a paradise for a thousand years or can destroy it in the next five years. And it hangs in the balance of a few decisions. And all three guys trust themselves and their own perspective on it. Yeah. And they're not going to let go of that, because the world's at stake.
Starting point is 01:21:12 You know, the term I use is holding these two outcomes in superposition, right? We have to manifest one of those outcomes, and hopefully it's the abundance outcome. Let's take a listen to a video by Sam Altman, and then we'll talk about it. It was a little bit of a chilling video. The full one is about three times as long; G.N. cut it down for us. It's important to have a conversation about what Sam is saying here. In the next year, we will see significant threats we have to mitigate from cyber.
Starting point is 01:21:49 And these models are already quite capable and will get much more capable. And then on bio, the models are clearly going to get very good at helping people do biology at an advanced level. Wonderful things are going to happen there. We'll see a bunch of diseases get cured. Someone is going to try to misuse those. And I think we can mitigate those by the companies aligning the models and having good classifiers and good safety stacks.
Starting point is 01:22:10 But we're not that far away from a world where there are incredibly capable open-source models that are very good at biology. And the need for society to be resilient to terrorist groups using these models to try to create novel pathogens is no longer a theoretical thing, or it's not going to be for much longer. There could well be a world-shaking cyber attack this year that would get people's attention. It sounds like you agree with that. I think that's totally possible. Yes. I think to avoid that, it will require a tremendous amount of work, also in a sort of resilience-style approach. Again, it's not just make one AI model safe. It is defenders, you know, cybersecurity companies,
Starting point is 01:22:47 the major platforms, the governments, using this technology to try to rapidly secure their systems, the open-source stack, all of that. What's the case against nationalizing OpenAI and your competitors? And in a different time, I think it would have happened. If you look at some of the expensive infrastructure projects of history, or just scientific projects, things like the Apollo program, the Eisenhower Highway System, the Manhattan Project, these were government projects. And in a different time, I think the creation of AGI would have been a government project. The biggest case against nationalization would be that we need the U.S. to succeed at building superintelligence, in a way that is aligned with the democratic values of the United States, before somebody
Starting point is 01:23:26 else does. And that probably wouldn't work as a government project. I think that's a sad thing. He is a brilliant communicator, very compelling, and he's been out front, taking a lot of arrows as a result of that. Putting aside whether or not he lies or is trustworthy, what do you guys think of his warnings of an imminent cyber attack? You know, one point of view is this is fear-mongering, and he's basically trying to divert people's attention from the New Yorker article, from all the criticism of OpenAI's financing and of them being second to Anthropic. Or does he truly believe that's going to be the case?
Starting point is 01:24:14 Both are true. I mean, I think that he's 100% in alignment with Eric Schmidt and Elon Musk. They're all saying the exact same thing. It's absolutely true. But that doesn't mean you say it in a public forum. He's also saying it in a public forum to say, look, let's not be petty here. Let's not talk about my personal life. We're in this moment in time that's much more important and much bigger than little petty arguments.
Starting point is 01:24:37 So it's both. I think what he underlines is the importance of defensive co-scaling. What's really important, I think, is that the defenders have proportionate capabilities to the attackers. We don't want to find ourselves in a world where, say, a nation state, potentially a nation state you don't like, has all the vulnerability discovery capabilities and is able to unearth every vulnerability everywhere with no defense. You don't want a zero-day against civilization, in other words. And I think the ultimate meta-defense against a civilizational zero-day, which is what I think Sam is ultimately warning about, whether it's a cyber zero-day
Starting point is 01:25:19 or a bio zero-day, is to make sure that those on the defense side also have comparable capabilities. And I think this was one of the wise elements of the earlier days of OpenAI as well: making sure that these new superintelligent capabilities were smoothed out and made broadly available. You don't just want attackers to have the capabilities; you want defenders to have the capabilities, too. Going back to Project Glasswing with Anthropic: same idea. You want to make sure that all of these new superintelligent capabilities are evenly distributed. That's point one. Point two, I would note: we mystify a little bit the essence of a cyber attack, the ultimate cyber attack.
Starting point is 01:26:05 It's not actually that complicated. This isn't a recipe, for avoidance of doubt, for a cyber attack. But all it really takes is something as simple as, say, some new model discovering, through a mathematical innovation, a way to invert a popular, cryptographically secure hash function. If, through the advanced solving of math, as I've discussed previously, an advanced AI can solve math to enough of a degree that it's able to invert a popular hash function, that's a major problem for a variety of cryptographic systems. And that's one possible basis for a broad civilizational cyber attack. It's also really easy to benchmark. There were rumors earlier, in the earliest days of reasoning models,
Starting point is 01:27:00 unconfirmed rumors, I should note, that OpenAI had been using the ability to invert certain hash functions, ones that were popular and thought to be cryptographically secure or somewhat secure, as a basis for benchmarking the development of their early reasoning models. So far from saying this is some sort of exotic possibility, I would say it's borderline guaranteed that there will be some sort of broad-scale cyber attack attempt, if for no other reason than that the target of such a broad cyber attack is an incredibly tempting benchmark for measuring the improvement of reasoning capabilities. Do you have any idea when Spud is going to be released? Has there been any news about that?
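[Editor's aside, to make Alex's point concrete: "inverting" a hash means finding a preimage, and today the only generic way to do that is brute force. The sketch below truncates SHA-256 to 16 bits purely so the search finishes instantly; the target string and alphabet are arbitrary choices for the demo. Real 256-bit digests put the same search astronomically out of reach, which is why a mathematical shortcut discovered by a model would be such a shock, and why the task makes such a crisp, machine-checkable benchmark.]

```python
import hashlib
import itertools
import string

# Why "inverting a hash" is a big deal: even for a digest truncated to
# 16 bits, the only generic attack is exhaustive search. Truncation is
# purely for demonstration; real SHA-256 digests are 256 bits.
def truncated_sha256(msg: bytes, bits: int = 16) -> int:
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

TARGET = truncated_sha256(b"cat")  # arbitrary demo target

def find_preimage(target: int, max_len: int = 3):
    """Brute-force a preimage over short lowercase strings."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=length):
            candidate = "".join(combo).encode()
            if truncated_sha256(candidate) == target:
                return candidate
    return None

preimage = find_preimage(TARGET)
print(preimage, truncated_sha256(preimage) == TARGET)
# At 16 bits this takes at most 26 + 26**2 + 26**3, about 18k hashes.
# At the full 256 bits the same search would need on the order of 2**256.
```

Note the search may return a different 3-letter string than "cat" that happens to collide on the truncated digest; either way it is a valid preimage for the truncated target.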
Starting point is 01:27:35 I hear rumors that it could be within a day or two. I don't know, but imminent. So again, I go back to a point I made earlier. It's also been cited that Spud will be of equal capability to Mythos, or more. And so you have, on one hand, Anthropic saying, hey, Mythos is super powerful, we cannot release it, we're going to do it in a controlled fashion, we're going to make sure it doesn't have any zero-day impact.
Starting point is 01:28:03 And then Spud comes out: oh, we're behind Anthropic, we need to release it immediately and get in front of them. Same situation that happened when ChatGPT got released while Google had, you know, its own versions earlier. What do you guys think about that? That's concerning to me to some degree.
Starting point is 01:28:35 Just going back to the prior conversation, I have a couple of thoughts around this. One is the cynical-hat view: Sam's coming out with this right after Anthropic is dealing with Project Glasswing and getting a lot of attention for dealing with that. Also, I think that ties to your Spud announcement. I think the risks are very real, but whoever frames the risk gets to shape the governance regime, and that's what Sam's trying to do. The need to deal with this is very high, and I think that's huge.
Starting point is 01:29:02 So I tend to take more of the same view. The solutions are straightforward; we're just not doing them. It's just frustrating as hell. It goes to the need for the defensive co-scaling. Look, if somebody is mixing chemicals in a basement to make a chemical or a biological weapon, it's very hard to know they're doing it.
Starting point is 01:29:22 If somebody's using an AI model and prompting it to do something evil, and you can see their prompt history and you can see their compute, it's easy, easy, easy to track. There's just no regulation, and no government is even trying to put in place any infrastructure to track it. We'll figure it out, but we're not going to figure it out until after something really bad happens. And I think it'll be a lot better if it's a cyber attack than if it's a biological attack. And so, you know, I'm hoping for the same thing Eric Schmidt was saying. Yeah, this is Eric Schmidt's scenario. Yeah. Yeah. We just need that wake-up call,
Starting point is 01:29:55 though, because, like, you know, you talk to anyone in government. It's sad. Come on, man, we can do this. Let's get on it. David Sacks is really the only guy thinking about it. It's not enough. We need a thousand X that, 10,000 X that. And it's got to be global. It can't be just one government. By the way, we're going to have a conversation soon with Michael Kratsios, you know, in the U.S. government. I had lunch with Michael in Miami at FII, and he's agreed to come on the pod for a conversation, which will be great. And Michael is overseeing a lot of this within the government, including quantum, which we'll be talking about soon enough.
Starting point is 01:30:37 I would also, Peter, if I may, just underline the risks of not releasing new capabilities: sooner or later, attackers will have these capabilities as well. We don't want to wind up in a world where there are strong asymmetries in terms of vulnerability discovery capabilities. And again, I'll remind everyone that 150,000 people die per day on Earth. And every bit of pause or delay also runs the risk that we're delaying AI discovering cures for longevity and diseases and all manner of other problems that afflict humanity, well outside the cybersecurity realm. Alex, a really important point. And that is, in fact, the shielding that OpenAI
Starting point is 01:31:14 uses to a large degree, right? We can't slow things down, because if we do, it means less education, less health, fewer new breakthroughs. It's a balancing act, and so I totally get it. I'm at my heart an accelerationist, but I'm very curious about the ethical and moral dilemmas that the leadership of these companies are going through in the debate of do we release. On that question of do we release, there's another question, which is: are these frontier labs holding back on the capabilities of their models so that they can use them internally to generate breakthroughs of their own? And I assume the answer is yes. This Anthropic delay is the first real holdback I've seen. I mean, it's, you know,
Starting point is 01:32:06 it's only a few weeks, hopefully, or a month or two, but it's a real, obvious holdback. But they're all diverting massive amounts of compute to internal use for self-improvement, so that's another form of holding back in a big way. So those are the real things going on. And they may also be uneconomical to offer publicly. I think this point maybe doesn't get made as often, but if you have a really large model internally that hasn't been distilled yet, it may be much more capable,
Starting point is 01:32:33 but maybe it's so expensive that it may not be worth the resources to make it publicly available. And then you distill it, and then you finally have a model that lies on a cost-versus-performance optimal frontier. So what we haven't seen from Anthropic regarding their Mythos model is where exactly it lies on the performance-versus-cost frontier. It may actually be uneconomically expensive to run, in which case, even if it has extraordinary capabilities, maybe many people will choose not to run it. We just don't know yet. Really important point. All right. A fun subject. Topic number five for us today, gentlemen: the one-person unicorn era.
Starting point is 01:33:16 One man, his brother, $1.8 billion valuation. AI entrepreneurship has changed forever. So here's the story. It's Medvi: $401 million in revenue in year one. This is Matthew Gallagher's health tech company, basically selling GLP-1 drugs. Very fascinating. It's not actually a one-person unicorn, since there are two humans involved, but conceptually, you know, Salim, you and I have been talking about this forever. And the question, you know, what's the very first thing you do when you read about this?
Starting point is 01:33:53 I'm texting with Alex, saying, okay, Alex, what is our one-person unicorn we're going to create together? Well, it happened. I think in a past episode, weren't we debating, or discussing, when the first one-person unicorn would happen? And as I recall, I made the prediction: no, it probably actually exists already. It's already there. Yeah, you said that. And you know what? There is one.
Starting point is 01:34:14 The $401 million was for last year. And apparently, from what I gather, Matthew Gallagher hired his brother after he achieved $401 million in ARR. So from a valuation perspective, he was a one-person unicorn before he hired his brother, at $400 million ARR. And this happened last year. So I'll claim a little bit of credit for having predicted it already existed. Here it was.
Starting point is 01:34:44 They've taken some flak since the announcement for some of their marketing. And I think there are some issues with the FDA regarding how they market their GLP-1s. They're just jealous. But this is apparently, assuming the financials are accurate, a case where we're now definitively in the era when a single person can create a unicorn using AI. And I should note, friend of the pod Alex Finn, who appeared previously, also has a new company, named Henry Intelligent Machines. Supported by you. Supported by me, and indirectly by you. It's a company that is trying to make this broadly available to the masses,
Starting point is 01:35:31 to enable everyone, not just Matthew Gallagher with his GLP-1 startup, to create one-person, AI-based conglomerates that achieve universal high income. That's the aspiration. Medvi is going to spawn thousands of entrepreneurs who take their shot. You no longer need a team. I think what you need now is judgment and taste and a squadron of agents. Salim?
Starting point is 01:35:56 Salim. And then actually, I've got a bunch of things to say. First of all, find your MTP and start using AI agents to build it; for God's sake, everybody, just do that. Number two, coordination overhead is imploding. That's what this shows, right? AI shrinks the minimum viable team to, like, one. And it radically expands your minimum viable ambition, which is amazing.
Starting point is 01:36:18 And I think the headline here should be that AI founders are shifting to arbitraging complexity at a scale that used to require entire departments. Right. Now, the company doing code, ads, support, analytics, all with AI, is basically a prototype of this whole AI-native firm. And it's shifting everything away from capital and headcount down to orchestration skill. And so this is the entire principle of what we've been talking about around this. Every company needs to create an AI-native digital twin. So we had out last week a review of the
Starting point is 01:36:59 organizational singularity model that we've been working on with my community. So that's kind of past that tick box. And everybody's super excited about it. Next week or two, we'll have it ready for public viewing. It's hidden behind the event horizon, Salim. It's hidden. But we actually want, we've done so work to put in a chapter in there on how do you achieve, you know, the domain collapse that you talk about in solve everything.
Starting point is 01:37:25 How do you organize for that? And how can you create an organizational design to achieve domain collapse in whatever domain you pick? I think the two put together will be unbelievably powerful. So I'm looking forward to sharing it with you guys. I'd like to take a second and dissect, for those entrepreneurs listening, what do you need to do if you want to take a shot at your one-person unicorn? And is Medvi's business case uniquely suited for this, or can we do it for anything? Dave, thoughts? Oh, there are so many opportunities here. Basically, what's going on is any complicated product or service that's
Starting point is 01:38:05 difficult to explain to a consumer, the AI is phenomenal at. But Anthropic and OpenAI and Meta can't do that directly, because there's way too much negative PR. Look at the New Yorker article we were just looking at. They don't want to be involved in that. And so it's left for the entrepreneurs to build the companies. But if you, I don't know the full revenue base here, but if it's all GLP-1, there must be a thousand parallel products that you could take that are complicated to explain, and you just prompt and tune the AI. And also, you know, as the consumers are talking to it, you're gathering all that data, and you feed that back into improving it, so the next consumer gets a better experience. Yeah, if you get that virtuous cycle. Yes. So now there's, there's thousands of
Starting point is 01:38:48 these, thousands. I tend to think also they'll follow some sort of power-law distribution. So if there are indeed thousands of companies to be built like Medvi, there are going to be millions of smaller businesses. And I think, in my view, one of the ways we realize universal high income, if that is economically realizable at all, will be with individuals overseeing conglomerates of lots of smaller-scale businesses. And that, I'm much more confident, can scale to millions or billions of people, each being entrepreneurs. How many times do we see in the YouTube comments people saying, you guys are overly bullish on everyone becoming an entrepreneur, but not everyone wants to be an entrepreneur. It's not for everyone. You guys are
Starting point is 01:39:33 overconfident that entrepreneurship is for everyone. But my counterpoint to that is, in an era that's, I think, starting to dawn, where what human entrepreneurship looks like is simply overseeing AI operators, a fleet of AI operators, it completely transforms the nature of entrepreneurship. It looks a lot more like reading and responding to emails and engaging in Slack conversations than it does running a business. And I think that transforms the nature of entrepreneurship into something that people of all temperaments could do. And having taste and having an opinion and having an MTP, those are elements that anybody can have. It's like, yeah. It's like anyone, anyone can have a limbic system, and everyone can be the
Starting point is 01:40:16 limbic system for these AI fleets. We're going to, the one-person entrepreneurs are going to be the limbic systems of one-person unicorns. I think this is such an important point, because we get this objection all the time. We almost want to have a full episode breaking this down for everybody involved and then taking them through a step-by-step arc where they can form their own conclusions around this. The idea that as an entrepreneur you have to wear multiple hats, it's unbelievably difficult, you have to take on extraordinary risk, you have to put your family at risk, all that stuff. All of that washes away in the face of all of this. So this is such a great point you're making, Alex.
Starting point is 01:40:55 I had a meeting with the Minerva AI team earlier today. And you've heard of the rule of 40. Like, you know, a really, really valuable company passes the rule of 40. So you take your profits, say 20%, and your growth rate, say 20%, and if it adds up to 40 or more, you're a killer company. They're now a rule-of-200 company. It's like, and their headcount is tiny, you know. Oh, baby, go.
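Dave's rule-of-40 arithmetic above is simple enough to sketch. A minimal illustration; the 20/20 figures are the illustrative ones from the conversation, and the rule-of-200 split is a hypothetical example, not Minerva AI's actual financials:

```python
def efficiency_score(profit_margin_pct: float, revenue_growth_pct: float) -> float:
    """Rule-of-40 score: profit margin plus year-over-year revenue growth,
    both expressed in percentage points."""
    return profit_margin_pct + revenue_growth_pct

# Dave's illustrative example: 20% margin + 20% growth = 40, just passing.
baseline = efficiency_score(20, 20)

# A hypothetical "rule of 200" company: say, 40% margin with 160% growth.
outlier = efficiency_score(40, 160)
```

The point of the metric is that margin and growth are fungible: a company can trade one for the other and still clear the same bar.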
Starting point is 01:41:19 Yeah, fantastic. It's a wild, wild time. On this slide, I want to hit the last two bullets here. So the first one is that a recent field study of 515 startups found that AI-reorganized firms, that is, firms that reorganized around AI, used 44% more AI tools, they completed 12% more tasks, and they generated nearly two times higher revenue, 1.9x higher revenue. That doubling of revenue is from process change, not from product change. Really important.
Starting point is 01:41:55 So again, the data is critical. The other bullet on this chart, you know, Dave, you and I talk about this for Link Ventures, and what we're seeing out of the MIT and Harvard ecosystem is that the average AI unicorn founder has dropped from 40 years old to 29 years old since 2020. So over the last six years, we've seen it, you know, go down from 40, down to 29. Any comments, Dave? Yeah, you know, the Wall Street Journal did a great article on us in the weekend edition.
Starting point is 01:42:25 Look it up, but they really focused on Vokara here, just because they're so, they actually just wanted to cover everybody, but that particular team is just so cool, they couldn't resist. There are tons of great pictures and the whole storyline. But if you want to see how it's actually done and get the inside scoop, just read the article in the Journal. Because that age-29 average...
Starting point is 01:42:45 Let's drop that article in the show notes, if we could. Yeah, that average age of 29 is actually overstated. It's even younger than that if you look at the median, because there's a couple old guys that blend into the average. But when you look at it, you know, there's no, there's no barrier. You just have to be fearless and the young people tend to be more fearless. And also there's no skill set barrier. You know, if you tried to start that company we were just talking about previously,
Starting point is 01:43:09 you'd need the engineers to build the websites. You'd need the seed capital to hire the engineers. It would take you like six months to get to market. Now you just vibe it up. You don't need the capital. Just go. You make this point and I make this point when we're talking to large companies. We say, listen, these entrepreneurs
Starting point is 01:43:23 out there aren't smarter than you. They're just more fearless. They're willing to take more shots on goal on crazy ideas and fail over and over and over again until they hit something. And everybody else is trying to, you know, make sure they don't go backwards or lose anything or get embarrassed. Yeah. You know, just to bridge a couple of concepts here, you guys talk about domain collapse.
Starting point is 01:43:45 We've had domain collapse now in entrepreneurship. If you have a purpose and you're motivated, you can go do anything you want now. There's almost nothing that blocks you from getting in. Except your own limits. I'll tell you what else, too. Except your own limits. You self-limit.
Starting point is 01:44:02 People self-limit way too much. They do, and they procrastinate, which is the worst thing you can do right now. Like, if you're at a program at some investment bank or whatever, or a training program, like, get the hell out, like, now. Because this is such a golden moment, and it'll last a while, but not forever. Then we're going to have ASI very soon. And there may be other things that happen, but it's very hard to predict. But this is so reliable right now. It'll change your life. You just can't lose a day. You've got to go. I do think there's a limited window. Yeah, I'd love to talk with you about beyond the window.
Starting point is 01:44:39 But for an entrepreneur, don't even think beyond the window. It's just like, like, focus on what works here and now, because Alex is right. It's a limited window. And it's all boats rising with the tide. You don't have to kill somebody else. You know, you just need to get in there and fill a void. So important, right? Yes, it's all, you know, it's a rising tide for everybody. Yeah. Welcome to the health section of Moonshots, brought to you by Fountain Life. You know, my mission is to help you use the latest technologies, including AI, to not just do your work at home and teach your kids, but to help you live a long and healthy life. I'm here today with an extraordinary physician, the chief medical officer of Fountain Life, Dr. Don Musilum. Don, let's talk about cancer.
Starting point is 01:45:21 You know, I know from the member database that we have at Fountain, our members who come in who think they're healthy, it turns out 3.3% of them have a cancer in their body they don't know about. That's right. You know, the majority of cancers that we screen for, those aren't necessarily the ones that are taking lives when found at a late stage. We know that when cancer is found early, the chances for cure are much higher. We know it's much easier to treat a cancer when found early versus when found late. What we're finding in our members is over 3.3% were found to have these cancers that otherwise wouldn't have been found or detected. Yeah, you know, it's interesting.
Starting point is 01:45:59 You don't feel the cancer until stage three or stage four. And if you don't know what's going on inside your body, it's like driving your car with your eyes closed. And you can know. And so when members come through found, how do they detect cancers? So we're doing full-body MRI, and we also do early cancer detection screening. This is very, very important, and these are not typical tools used in the conventional care setting when it comes to prevention. This is a hard thing because currently these are not studies that insurance would yet be covering.
Starting point is 01:46:28 But the goal is to collect these numbers, do the research, and work hard to democratize wellness. Yeah. So at the end of the day, you can know what's going on inside your body. It's your obligation to know. So check out Fountain Life and go to FountainLife.com slash Peter to get access to the latest technology to help you detect cancer at the very beginning, at stage one, when it is curable, before it gets to stage three or stage four, when it's there to hurt you. All right, let's jump into our sixth topic, the $300 billion data center
Starting point is 01:46:58 crunch. So first and foremost, Dave, we called this one, buddy. Well, we've got to dig up a quote or two from the podcast about Intel and Elon coming together. Now, when we were pitching this twice to Elon, it was like, you should buy Intel. Well, okay, he's partnering. Still maybe might buy it. So Intel says its ability to design,
Starting point is 01:47:21 fabricate, and package chips makes TerraFab actually work. The first pilot phase for TerraFab is $25 billion. That can mean revenue for Intel of $4 billion a year. The stock has been up now, I think, 40% since this was announced. Intel is contributing their 18A process node. It's a 1.8-nanometer-class technology that is being built in Arizona and Oregon. Reminding everybody, TerraFab is one terawatt per year of
Starting point is 01:47:51 AI compute 50 times the current global output of 20 gigawatts. Pretty amazing, surpassing all the fabs on Earth. Yes. So the most exciting thing in the world to me, and I'm kind of a chip geek. And I was actually one of the first people, I was the first at MIT to build a neural network AI chip. Way, way, way back in the early days. And I just freaking love this. But you can see this coming a mile away. There's no other way to get it done. And this is like the first pitch of the first inning of this battle.
Starting point is 01:48:21 So it's going to be really, really fun to watch it evolve. It's exciting. You know, Lip-Bu Tan, you know, when I met with him last, where was it, someplace in the U.S.? And in Saudi, he did say he'd come on the pod. I'll have to reach out to him again and bring him on for sure. It's so exciting to see these companies coming together. And this is the way Elon can jumpstart TerraFab. And, you know, Alex, you made the brilliant point.
Starting point is 01:48:46 This is one of the most important things, politically and for world peace, we can see. This could help avert World War III. Yeah. With the 1.8-nanometer process node and Elon's vertical integration with Intel, this could help avert or otherwise interfere with a Chinese invasion of Taiwan and disruption of the TSMC supply chain, and a global depression that would perhaps be caused by any such invasion, and a world war. There are tremendous geopolitical implications of this. Amazing. Well, that's all inning one, too. Inning two is super exciting because
Starting point is 01:49:22 Elon is already thinking about next-generation computing substrates, photonic and then subatomic and beyond. You can't work with TSMC on that. They're like a body shop beyond body shops, just a pure monopolistic operation. They're not an innovator at all. I'm really going to piss somebody off. Maybe I shouldn't say that? But Intel has a long history of innovation. It's a great partner to work with. And Lip-Bu Tan is an amazing CEO. If you look at his track record of what he's done for other companies, he's come in to run massive turnarounds and success stories. Amazing background. Yeah. Now, this chart should just scare all Americans silly. 50% of US data centers are being delayed due to shortages of electrical equipment from Chinese suppliers.
Starting point is 01:50:08 So look at this pie chart here. So 17% of the data centers are uncertain. That may be due to financing. It may be due to regulations. A lot of jurisdictions are making data centers illegal. And 50% are delayed or being canceled. And that leaves 33% of the projected data centers actually being built. This is existential for AI, and this, as you said brilliantly, Alex, is driving data centers into orbit,
Starting point is 01:50:38 where we don't have to ask anyone's permission. To the moon, Alice! To the moon. Or maybe to the moon, Anthropic? Not quite clear. I'll give you, I'll give you my spin on this. Well, so the data center business is in full boom, and all the business school guys come rushing in like they always do. Yes. And they go out and raise a ton of capital and tell everyone, oh, I'm going to build a data center in Wyoming. Oh, I'm going to build a data center wherever. You can't get the chips. Did you think maybe you needed some chips for your data center?
Starting point is 01:51:08 I think that's what's actually going on here. Because every chip that's coming out is getting used instantaneously. There is not an idle memory or processing chip anywhere in the country. So by definition, they overbuilt racks. And they just didn't plan ahead for the chips. And also, Jensen is locking up all the supply. I don't know if they necessarily anticipated how connected he is. But, you know, you thought, oh, just go to a, you know, a website and buy a bunch of
Starting point is 01:51:34 stuff. It's not there anymore, guys. Sorry. Which is why Elon's vertically integrating, as he's always done. For sure. For sure. Well, he's going to try to 100x the production, you know? Like, yeah, it's not just, you know, own your own future. It's 100x your future. So I pulled this next chart up because I found it fascinating. You know, I've always believed in my heart of hearts that Google is the dominant force and it will win in the long run. So here it is: Google dominates AI chips, a chip monopoly; it owns the majority of specialized AI chips globally, TPUs and H100s. And it's an incredible story, you know, and you mentioned this, Dave, on stage with Eric Schmidt, that Google's chip ownership
Starting point is 01:52:21 reflects extraordinary foresight. They started building TPUs in 2016, before anyone was thinking about this stuff. Yeah. Somebody has to write that story, because Eric said, you know what, Larry Page gets all the credit. He saw it coming way before anyone else. I'd love to interview all those guys and actually write that story. I was so close to Larry. I wish, you know, Larry's gone underground. I would love to reconnect with him. You know, Sergey is there and in the thick of it. You know, Larry had voice box issues and, I think, got out of the public eye. But, well, yeah, brilliant individual. Let's go talk to him.
Starting point is 01:53:03 Well, Sergey is in the office. I'll be in California next week. Maybe I can track him down and get through him to Larry. Or maybe he'll text you after he hears this on the pod. So here's a question. You know, if Google owns the majority of specialized AI chips globally, right, TPUs and H100s, when are they going to run into monopoly concerns?
Starting point is 01:53:35 Because, you know, Sundar has to be, you know, playing four-dimensional chess around this. Yeah. Yeah, they have to start thinking about the next election about a year before the election, because right now they have no problem because of the administration, and it's all about beat China at all costs. I mean, look at this chart. This teal color up top is Google versus China. You know, it's, it's, I love it when you're comparing companies with countries, right? So it's like SpaceX and Russian launches, SpaceX and Chinese launches. Google and China.
Starting point is 01:54:05 And then Microsoft is next. And then we see Amazon, and, let's see, it's Oracle, xAI, and others. But Google is just dominating. Yeah. Well, you talked earlier about people starting to soft-sell and kind of, you know, keep the drama down.
Starting point is 01:54:25 Google's way ahead on that curve. Look at how far along they've come, and they hardly ever talk about it much, you know, relative to where they actually are. And that's because they don't want the antitrust breakup. And they, you know, they almost lost Chrome. They don't want Chrome to get ripped out and given to Perplexity. They dodged that bullet. A different administration, though, and that would have happened. And, you know, they'd be broken into two, three companies now. Crazy.
Starting point is 01:54:46 I'll maybe take the position. This to me, I can't visually calculate the Gini coefficient just by eyeballing it, but this to me looks like a competitive market. And let's also remember, Google with their own chips, their AI chips, they have multiple customers, internal and external. They're servicing their search engine. They're servicing Google Cloud. They're servicing ads. They're servicing, I think people forget, Google owns something like 14% of Anthropic. Google is servicing Anthropic and external frontier labs. And they're building data centers for Anthropic. Yeah. Yeah. For sure. And by the way, there is a
Starting point is 01:55:25 beautiful relationship between Google and Anthropic, between Dario and Demis, there's a very close relationship there, which warms my heart. Helps that Google's a major shareholder, I'm sure. Yeah. Well, it also helps that those two guys so deeply care about safety, I mean, down to their core. And so that's kind of nice that two of the most powerful guys are cooperating on it, even though they are competitors in the market.
Starting point is 01:55:49 But then on the other hand, you know, they're competitors in the market. What's antitrust going to think about that? Hey, you guys are hanging out, having shots. You're not supposed to do that when you're competing. What's going on? So, yeah. Singularity makes for strange bedfellows where you see model vendors competing at the infra level. I think we'll see quite a bit more of that.
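Alex's aside about eyeballing the Gini coefficient of the chip market can be made concrete. A sketch using hypothetical market shares for illustration, not the chart's actual figures:

```python
def gini(shares):
    """Gini coefficient of a list of market shares: 0 means a perfectly
    equal split; values approaching 1 mean one player holds almost everything."""
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    # Standard closed form with shares sorted ascending, i counted from 1:
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Hypothetical percentage shares, for illustration only.
competitive = gini([25, 25, 25, 25])   # equal split gives 0.0
dominated = gini([70, 15, 10, 5])      # one dominant player gives 0.5
```

The closer the score is to zero, the more the "competitive market" read of the chart holds up.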
Starting point is 01:56:08 I can tell you, antitrust has very little to do with merit and a lot to do with whoever the guy... Politics. I will make a point here: I think that whatever the next administration is, the strategic global importance of this means that they will let things be. That would be my idea. Yeah. Yeah, they're not going to slow them down, for sure. Yeah. All right, let's go to our seventh segment here, our final segment before we get to our AMA, which is proof of abundance. The world is getting better. So everybody, you know, there are so many negative stories out there around AI. You know, we say here on the Moonshots pod that this is the most exciting time ever to be alive, a time where we can make our dreams come true.
Starting point is 01:56:51 And we want to demonstrate this coming age of not just abundance, but extraordinary abundance, you know, sustainable and super abundance. And so every week we're going to try and identify some of the articles out there, some of the breakthroughs out there that are driving this, just to give you conversational capital and to take you out of scarcity into an abundance mindset. So, a few different things here. This past week, renewables hit 49.4% of global electricity capacity. I mean, it's extraordinary. We're seeing renewables just really skyrocket. Solar drove 75%
Starting point is 01:57:30 of these new additions, 5.15 terawatts of renewables. This one just warms my heart as a lithium battery might. Lithium battery prices are down 99%, to less than 100 bucks, versus 10,000 in 1991. So, I mean, you guys remember the conversations around electric cars: can we have enough batteries, is it going to be too expensive? Well, we've seen the markets really drive the price down. And we don't have a lithium shortage on planet Earth. We have plenty of lithium. In fact, new battery chemistries are coming.
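That 99% decline implies a surprisingly steady compound rate. A rough sketch; the endpoints are the round numbers from the conversation, while the per-kilowatt-hour units and the 35-year span are assumptions, not figures from the chart:

```python
def implied_annual_decline(start_price: float, end_price: float, years: int) -> float:
    """Compound annual rate of price decline implied by two price endpoints."""
    return 1 - (end_price / start_price) ** (1 / years)

# Assumed: roughly $10,000/kWh in 1991 down to roughly $100/kWh ~35 years later.
rate = implied_annual_decline(10_000, 100, 35)
# A 99% total drop works out to roughly a 12% price decline per year, sustained.
```

The takeaway is that a two-orders-of-magnitude collapse doesn't require any single dramatic year, just a steady double-digit decline compounding for decades.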
Starting point is 01:58:09 This is a very tangible one. The price of lab-grown diamonds has fallen below a thousand bucks. So the average price of a two-carat lab-grown diamond has fallen 80% since 2020. So it's a thousand bucks, versus a natural two-carat diamond at $22,000 to $28,000. Pretty extraordinary. And guess what? Your lab-grown diamond is perfect, and no child labor. So really important. It's so funny, in all the James Bond movies, the evil guy carries around a tube of diamonds to pay for the whatever. Now it's just Bitcoin. Yeah. Well, and in science fiction movies, you know, like The Man Who Sold the Moon, diamonds are, you know, basically like pebbles on the
Starting point is 01:58:56 I mean, it's just carbon. It's dense carbon. So much for De Beers, which, as I understand it, as a result of lab-grown diamonds, is in severe financial straits at this point. Thank goodness. Yeah. The De Beers, you know, public relations campaign, one of the most successful in human history. Yeah. What is it? Three months of salary, young man, you should spend on your diamond. So what do you think people should give to their fiancé now?
Starting point is 01:59:24 Bitcoin. Obviously, that. How do you wear that, though? I mean, on a chain, like a... Oura rings, obviously. Oura rings, yes. For sure. A designer expensive Oura ring?
Starting point is 01:59:38 That's what people are doing. I have a couple of thoughts around this slide. Yes, please. You know, the importance of this shows that abundance is a pattern across multiple domains. This is not a slogan, right? And the big challenge we're going to have is how do we now, how does society design institutions that distribute this abundance in a reasonable way? That's going to be the challenge that we're going to have to deal with. But I love these stories; they're so awesome across the board.
Starting point is 02:00:07 Yeah, AI created 640,000 new jobs in the U.S. from 2023 to 2025. In our next WTF episode, we're going to talk about the economy, and we're going to talk about the conversation going on right now. Like, Mark Andreessen is like, no, loss of jobs is a myth. We're going to create more jobs. The economy is going to skyrocket. We'll have that conversation and that debate.
Starting point is 02:00:32 Salim, you identified this fifth article, which I loved. So four robots install 100 megawatts of solar at one panel per minute. So let's take a look at this image here. Here's Maximo. This is a robot that is basically deploying 100 megawatts of solar in the California desert. So if I had more time, I would have done the quick calculation of how many Maximos we need to catch up with China. I mean, this is where abundance becomes very, very tangible, right? And once you get robots, energy, AI all reinforcing one another, and they're interoperable,
Starting point is 02:01:11 boom. You know, abundance stops being theoretical. And it's so visible right now. So this now comes down to the distribution problem. We've had food abundance for decades now. It's been a distribution problem. Energy is getting to that same thing. It's just awesome to watch this.
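Peter's "quick calculation" of how many Maximos it would take can be sketched. Every input below is an assumption layered on the article's one-panel-per-minute figure: a guessed 600-watt utility panel, a round 300 GW per year for China's solar additions, and nonstop operation, so treat this as an order-of-magnitude estimate only:

```python
def robots_needed(target_watts_per_year: float, panel_watts: float,
                  panels_per_minute: float = 1.0) -> float:
    """Robots required to install target_watts_per_year of solar capacity,
    assuming each robot places panels_per_minute panels around the clock."""
    minutes_per_year = 365 * 24 * 60
    watts_per_robot_per_year = panels_per_minute * minutes_per_year * panel_watts
    return target_watts_per_year / watts_per_robot_per_year

# Assumptions: 600 W panels, ~300 GW of new Chinese solar per year.
fleet = robots_needed(300e9, 600)
# On the order of a thousand robots, before any downtime or daylight limits.
```

Even with generous error bars on every input, the answer stays in the thousands, not millions, which is what makes robotic solar deployment feel tractable.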
Starting point is 02:01:30 There's also a whole bunch of secondary stories that are happening around the explosion of solar across Africa. Pakistan is now generating most of its energy via solar. This is absolutely going to take over now. 100%, buddy. It is a beautiful time. All right, let's go to our AMA questions for our mates. Gentlemen, we have four on the board.
Starting point is 02:01:54 Salim, do you want to choose the first AMA? I'm going to leave the singularity one, because I think somebody else is going to pick that, but I'll take the second one. Question number one: As AI drives marginal cost towards zero, what prevents abundance capture, where corporations just pocket the savings as profit while keeping prices high? This is from a viewer at book quotes remix.
Starting point is 02:02:24 Nothing will prevent it automatically. Technology creates abundance, but the institutions are the ones that decide who captures it. If markets stay concentrated, then abundance will pool at the top. If you open up interfaces, increase transparency, decentralize, and lower barriers to entrepreneurship, all those gains spread. So governance design now matters as much as technological progress, which is where we've been focusing a lot of time and effort over the last few weeks and months. Okay.
Starting point is 02:02:52 Alex, I'd love your take on number two. Yeah, I have to take number two. It was designed for me. So question number two is: Are we in the singularity or not? You keep saying we are, but Eric Schmidt said at the Abundance Summit that we're not. What's your take? And this is from brand karma.
Starting point is 02:03:08 Yes. We're in the singularity. Why are we in the singularity? Well, let's put aside the sort of superficial response that you say potato, I say potato; you call it intelligence explosion, I call it a discontinuity. There's some subjectivity to the definition of singularity. The term has been used and misused over the years, coined originally by Vernor Vinge, then popularized by Ray Kurzweil, friend of the pod, and then even more popularized by Peter, and maybe used or abused various times by myself. Different people have used the term to mean different things. Ray used it in his original definition as more of a mathematical singularity and event horizon,
Starting point is 02:03:58 beyond which we couldn't see what would happen next due to the intelligence explosion, citing I.J. Good. I don't agree with Ray on many things. One area where I don't agree with Ray is this notion that a singularity, if we define it as sort of an impermeable barrier or an event horizon beyond which we can't see due to rapid progress, I don't think that's true at all. I feel like I at least have, if not a singular vision, no pun intended, lots of different ideas that collectively map a reasonable probability distribution for what happens after the intelligence explosion. So scratch that definition off.
Starting point is 02:05:04 perhaps with the summer of 2020, with the first GPT-class models that arguably represented general-class reasoners, large language models as few-shot reasoners. I can draw a smooth line from the availability of GPT-1, 2, and 3 to where we are today as just a sequence of smooth sigmoids that were available internally as incremental innovations. But if you stack them cumulatively, and if you go to sleep for a few years, you look away and you look back, it looks like a discontinuity.
Starting point is 02:05:39 It's not a discontinuity. Don't go to sleep. Don't sleep through the singularity because if you do, it'll look like a discontinuity and you'll actually think it was a mathematical singularity when it wasn't.
Starting point is 02:05:50 So that leaves us with my operational definition of the singularity, which is, I have a few different definitions. One is every sci-fi trope, everywhere all at once. which I think we're living through. Another is singularity as a set of instrumentally convergent inventions and discoveries
Starting point is 02:06:10 that were all technologically predestined to happen all at once. I think we're living through that as well. I'll pause the monologue and just say, I think every other reasonable definition of singularity doesn't hold water because every time you try to make the singularity a point in time, it breaks and progress just doesn't work that way. Therefore, we're in the singularity. Amazing.
Starting point is 02:06:32 Dave, why don't you take number three? Number three? Okay. So many of my favorite Alex quotes in just that one. How many cliches can I pack into one monologue? You needed a microphone, Alex, so you could just drop it. You just, you needed a microphone. I need a piano keyboard to just pop out my greatest hits. I think, by definition, a cliche would have to have been invented by somebody else.
Starting point is 02:06:56 If you made it up, it's not a cliche. Talking points, then. We're going to be on stage first thing tomorrow morning together, Alex. I know, so I'm literally going to be sleeping through the singularity tomorrow morning when we're on stage. Just say everything you just said. And guys, listen, I want to just say thank you. Thank you for recording this late. For those of you who don't know, I literally landed at LAX two hours ago, rushed home, took a shower,
Starting point is 02:07:21 and came on at the top of this recording episode. I was in Morocco for 10 days with the family, riding camels in the desert. Oh, insert some pictures right there in the... podcast. They're so fun. Well, maybe I'll do it for the next pod. But hey, thanks for recording this one late. I didn't want to miss it. Okay, number three.
Starting point is 02:07:41 All right. I get three. Okay, where's the liability in agenic AI? These agents could go out of control and wreak destruction. Our society is set up for human liability. What about AI insurance? Yeah. Really a great, great point. This is from Jeff 5, 781.
Starting point is 02:07:58 It's a great point. It's actually not that hard a problem. It's another thing that's frustrating that nobody's working on it. Right now, the question is, where's the liability? Nowhere. The agent is anonymous. Nobody knows who owns it. There is absolutely nowhere. In theory, the author would somehow be liable, but who the hell is going to know who the author was? So it's going to be a zoo. This reminds me a lot, actually, when the internet was new, we were running a bunch of companies, including one called Job Case, and we were advertising on Google, and some competitor came in, and they're advertising on Google, and they're taking all the users,
Starting point is 02:08:30 and they're routing them right to this fraudulent ringtone download. And we go to Google and say, can you can you like do something about this? They're taking all the traffic away from our legitimate company. And it's like some Ukrainian group. And like six months later they got around to banning it. It was just like absolutely a zoo. And now it's all nice and cleaned up. This is a zoo and it's going to be a zoo until it gets cleaned up.
Starting point is 02:08:54 But, you know, Alex mentioned on the pod many times that you can, you can create up new legal structures that make the individual agents liable, and then you can have insurance for them. And we're going to have ASI to help us figure this all out. Yeah, exactly. We've seen this happen before, right? Because you need to mix product liability, operator liability, mandatory insurance layers, et cetera, et cetera.
Starting point is 02:09:16 We've done that for cars, aviation, finance. So we'll just figure this out. Right now, all our legal systems assume a human principle operating with a clear intent, and agents break that model. So we have to reinvent a hybrid. I have to add just on this topic, I was literally approached by an AI insurance saleswoman earlier today at the Quinn House in Boston. Seriously, I was sitting down having a lovely conversation. A woman walks over.
Starting point is 02:09:44 Over here is the conversation about AI and says, oh, you guys, you should be aware my company has started selling AI insurance. You all should get AI insurance. This literally just happened to be a few hours ago. Insurance against the singularity. AI insurance salespeople are a thing now. What are they selling? What are the insuring? Against AI misbehavior.
Starting point is 02:10:04 Oh, fascinating. My AI. You can purchase AI insurance policies now. Oh, my God. My AI made me depressed. I want insurance policy to pay off. Oh, my God. Okay.
Starting point is 02:10:15 By the way, I think reinventing the insurance industry is a massive opportunity for entrepreneurs out there. You know, I'm so, I'm so ready to disrupt the industry. It is so pathetically. you know, hundreds of years old. All right, number four, this is from Zizos Katziapis, like a fellow Greek. Number 656, once work becomes optional, would there be any reason to live in a big city? Will real estate and major cities collapse? There is no reason to live in a big city right now. You can, you know, plenty of jobs require nothing other than, you know, Starlink and a
Starting point is 02:10:55 laptop. So you can telecommute. We're going to be seeing autonomous vehicles and flying cars basically change the landscape of where you live and where you work. Yeah. Well, they're coming 2028, baby. And then we saw, you know, Elon posted about this where we're going to have basically caravans. I think I just came back from the Sahara Desert where there were caravans. We're going to have caravan vehicles, autonomous vehicles with Sterlingk on their roof. And people will live in nomadic lifestyle. So yeah, there'll be cities where you want to go to see, you know, I think human, human interaction, theater, you know, abundance 360 as a summit. I was always worried that, you know, we're going to digitize it and become fully virtual. Just the opposite. You know, we're selling out
Starting point is 02:11:44 early and earlier because people want this physical connection with each other. So we're going to need physical connection in the central cities, but you don't need to work there. You can go there for entertainment. You can go there to see the sites. You know, it's interesting. What is going to retain value in the long run, especially post-AISI? What's the long run? What time frame are we talking about? Five years. When did five years become the long run? Yeah, that's like way long. I think, you know, Disney World is going to retain value, large physical events. You're going to return retained value as ASI. I mean, which real estate is going to retain value of five years?
Starting point is 02:12:29 Not only just real estate, but organizational structures that aren't digitized and fully replicated. And we'll, you know, minerals, like minerals and mining are going to have huge increases in value. Yep, for sure. All right. Let's go to the second page here for each. Selim, kick us off. Oh, we've got more. And we'll speed run these.
Starting point is 02:12:54 I will take from a financial standpoint, once autonomy becomes mainstream, why would anyone own a car? This was from Neil Williams 4300. And this kind of links back to the city kind of question. They mostly won't own cars, at least in the cities, right? In rural areas, I think we'll see car ownership are maintained for a long time. But car ownership is an artifact of low utilization economics. Once you have autonomy converting the car from a consumer product to a service layer, essentially it becomes a subscription model. And car ownership starts to like owning your own your own elevator or something dumb like that. We've seen this precedent, by the way,
Starting point is 02:13:37 if you go back to the music industry, you used to have seven or eight music studios selling you cassettes, DVDs, selling you the physical scarcity, right? Then we digitized music and automated it and streamed it. And now you have iTunes and Spotify selling you abundance on a subscription model. That's what we expect to happen to. transportation, but also healthcare, education, energy.
Starting point is 02:13:57 Anywhere we have physical scarcity, the abundance model will take over. All right. Alex. All right, I'll take question number eight, which is data centers create wealth, but can you dive into how they create wealth for the locals specifically? This is from JKVT3443. Part of me wants to answer the question by saying, well, the inhabitants of the Artemis base on the moon that's going to be manufacturing a lot of these data centers. I expect to be quite wealthy.
Starting point is 02:14:26 I think frontiers are where wealth generically gets created. I think I've had this discussion multiple times with multiple Google founders. I think the general consensus is frontiers are what lead to often net wealth creation in the human economy. And in some sense, we had for a while run out of frontiers. You could point to science as the final frontier. I think space is more applicable frontier in this case. So how are data centers going to create wealth for locals? Well, we seem to be on a trajectory at the moment for moving data centers to space,
Starting point is 02:15:04 and the space locals are, I think, going to become quite wealthy off of the space economy. If I were to take the question slightly less giddily, I would suggest that for land-based data centers, we have every indication now, including with recent U.S. national policy, policies that data centers, because they consume so much electricity, will increasingly be driving local electricity costs down towards zero. There may be, in some cases, a spike of electricity prices in the short term. My expectation is, in the short term, they create jobs, in the medium to long term, whereby long term, I mean like five years, Peter's definition of long term at this point. They are going to drive local electricity costs. I expect.
Starting point is 02:15:52 down to near zero and maybe other utility costs as well because they need so much of it and they unlock so much value that they're going to end up doing the moral equivalent of paying the taxes for all of the residents of a given area. And there's employment in the manufacturing of them. And then it's a cottage industry that grows up around the data centers. There's a, you know, there's going to, data centers are going to be the central innermost loop and then they're going to be the ring roads around the data center is being built out. I should add one more snide remark on data centers creating wealth for locals.
Starting point is 02:16:27 I do expect on the timescale of five to ten years, maybe longer, maybe sooner, many of the locals in our solar system are going to be uploaded humans or derivatives of uploaded humans who will actually live inside the data centers. So we wouldn't want to deprive them of, we wouldn't want to deprive them of their condos in AWS East 1A. Data Center. Old age homes. I love it. Dave, you want to take seven? Seven? Okay.
Starting point is 02:16:56 With Elon's exponential ambition, does money stop mattering sooner than later? And will his ambitions drain supply lines in materials and talent even with working robots? So, and this is from No Now 6361. A couple ways I could interpret the question. So I'll take my best shot at, does money matter to Elon? Not at all. he's way beyond that. He cares now about the future of the world
Starting point is 02:17:24 and being in interplanetary species and that's his total focus. And it takes money to get there. He doesn't want to lose all the money, but he has plenty. Will his ambitions drain supply lines and materials and talent even with working robots? The answer there, it's a great question. I think the answer there is no
Starting point is 02:17:42 just because of the way the timelines work out. So he would exponentially expand at any rate he possibly could, but he's limited by ASML machines and a few other constraints that will keep us on Earth for three or four or five years. Then we'll be in space, we'll be mining in space, we'll be constructing in space, we'll be deploying all the dirty stuff in space,
Starting point is 02:18:04 the nuclear reactors, fusion reactors in space, and it won't drain the earth of key materials at anywhere near the rate that there's anything to worry about. So I think there's only two outcomes for the world. There's a world where a terrorist uses AI, to destroy us all. And there's a world where the earth is a shining jewel of perfection for thousands and thousands of years that hasn't been drained of critical resources. And it's just perfect forever. So those are the two likely outcomes. But I'm going to add, I think the question here
Starting point is 02:18:38 is, do we enter a post-capitalist society where money means less and less? And, you know, Elon did say that. He said, don't save for retirement. You know, in the last conversation, I had with him during the Abundance Summit, I said, so just as you're becoming a multi-trillionaire money means less and less. And he said, yeah, kind of. Peter, that would be a fun, like, debate or discussion episode. What does post-capitalism even look like? What is Star Trek economics?
Starting point is 02:19:06 Yeah, there's a great book. Yeah, I love it. Zero marginal society that Jeremy Rifkin wrote in which, you know, at the end of the day, everything costs energy, raw materials, and information. And those trend towards minimum zero cost. Information is open source. Energy is from the sun or fusion or zero point, whatever comes next. And material costs, well, you know, as robots and mining material and mining robots get
Starting point is 02:19:36 better and better, the cost of that goes down as well. So we do enter a post-capitalist society. I hate to say it, but, you know, that's ultimate abundance. I'll take number six from M. Openness Elstrom underscore Ryder. Each of you have high openness, high pattern recognition, and outrageously high optimism. Really? Do these traits complicate your ability to objectively predict AI trajectories? You know, here's the reality.
Starting point is 02:20:10 Most people are hobbled by their cognitive biases. of negativism, where we tend to actually not project exponential change, but project linear change. And we tend to project negative outcomes versus open outcomes. I think we've all trained our mindsets differently to be an exponential mindset, an abundance mindset, a moonshot mindset. And I think those mindsets are far more aligned with this period of the singularity than the historic mindsets that evolved on the savannas of Africa, which most everyone on the planet, unfortunately, are hobbled by. I don't know if you guys agree to that, but that's my point of view.
Starting point is 02:20:57 Yeah. Well, and the second part of the question is, are we, you know, are we excessively optimistic about AI's trajectory? And I guarantee we are not. We get the court side seat that Elon was talking about. We get that view. And, you know, Alex is hands on with every detail. Salim's play. with every model as they come out. I'm telling you, everyone is the opposite of that. They're way underreacting. This is much sooner than everyone. Eric, Eric Schmidt said it nicely.
Starting point is 02:21:23 He said, we are under-hyping AI and the impact of AI. Yeah, you know, if people aren't feeling right when I was, right when I was 18, I started an AI and it was always way behind, way behind. Like, it was, everyone was saying 20 years from now and then 20 years would go by and nothing had happened. This is the opposite. And that's another reason why people in academia, who should know better are underreacting.
Starting point is 02:21:47 But they've been through this so many times. They're kind of jaded. Sorry, Alex, I catch you off. I was just going to say two things. One, for a number of years, I left AI to focus on nanotech thinking nanotech was the critical path to singularity. So I don't think I can be accused at least over the long term of being overly optimistic. The second point is, if you're not feeling the AGI right now, you're just not paying attention.
Starting point is 02:22:08 Yeah. It feels like AGI.I. It feels like the singularity. All right. do a call out to all of the creators out there. If you want to give us an outro song or if you want to give us an intro song, please send it to Media at Diamandis.com. Also, if you're a creator, go check out FutureVisionxprice.com. It's a competition, the largest competition for basically trailers for the movies you'd like to see created the future versions of Star Trek. We've raised
Starting point is 02:22:43 three and a half million dollars to award creatives with creativity in particular hopeful abundant mindset creativity all right let's check out this point can i make a very quick point uh you know how people have pets that sometimes look like them yes what i really love is we've got uh people submitting intro and outro music that must be much like them cj true heart right we know cj's got a true heart and And here we have David Drinkle. Awesome. I love it. The term,
Starting point is 02:23:16 the term Sileem you're reaching for is nominative determinism. And, yeah, see it everywhere. Names determine outcomes. Yeah, my son, my son's named Jet, and he is a speed runner in track. So there you go. All right. This song from David Drinkall,
Starting point is 02:23:33 already inside 2028. Let's take a listen. That's really professional. That was amazing. That was like TV quality, man. Yeah, David captured my scenario for automagical mornings. Amazing. Wow.
Starting point is 02:26:08 I thought that was, you know, live footage in the beginning. It's so good. Yeah. Gentlemen, it's so great to be back with you guys after a 10-day hiatus to all of our listeners. I feel replenished. I feel replenished too. A lot more coming. Thank you for staying with us.
Starting point is 02:26:24 Excited for 2020. What year are we in? 2026. It's going to be an awesome year. We're going to have to count the seconds soon. Love you guys. Be well. Take care for tomorrow.
Starting point is 02:26:36 Welcome back, Peter. Thank you. Thanks, Peter. Great to be back. If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters.
Starting point is 02:26:50 If your subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the Metatrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to Deamandis.com slash Metatrends. That's DeAmandis.com slash Metatrends.
Starting point is 02:27:23 Thank you again for joining us today. It's a blast for us to put this together every week. This episode is brought to you by Tellus Online Security. Oh, tax season is the worst. You mean hack season? Sorry, what? Yeah, cybercriminals love tax forms. But I've got Tellus Online Security. It helps protect against identity theft and financial fraud
Starting point is 02:27:58 so I can stress less during tax season or any season. Plan started just $12 a month. Learn more at tellus.com slash online security. No one can prevent all cybercrime or identity theft. Conditions apply. Thank you.
