Moonshots with Peter Diamandis - Sam Altman’s Attack, Amazon vs. Starlink, and What Opus 4.7 Actually Means | #248

Episode Date: April 18, 2026

In this episode, the mates cover Anthropic Opus 4.7, AI backlash and unrest, the Stanford 2026 AI Index, young workers getting squeezed out, AI-driven store automation, data center bans, satellite wars, transhumanism, and speculative futures like uploads and space colonization.

Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.

My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding

Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy

Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: https://www.fountainlife.com/peter

Connect with Peter: X | Instagram
Connect with Dave: Web | X | LinkedIn | Instagram | TikTok
Connect with Salim: X | Join Salim's Workshop to build your ExO
Connect with Alex: Website | LinkedIn | X | Email | Substack | Spotify | Threads

Listen to MOONSHOTS: Apple | YouTube

*Recorded on April 16th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 A 20-year-old Texan threw a Molotov cocktail at Sam Altman's San Francisco house. The suspect was on something called the official Pause AI Discord server. The state of Maine passed the first-ever statewide data center ban in the United States. Social unrest coming as a result of people's fear and people not getting jobs. Only 23% of the public is optimistic about AI. 99% of the people you bump into on the street are underreacting and unaware. If you don't want to use it, fine. Let other people use it and get the benefits of it.
Starting point is 00:00:34 Anthropic's Opus 4.7 dropped. It is moderately interesting. Is it mythically interesting? No. The new guidance is use prompts. Use prompts for everything. The problem is it's sort of an Osborne effect, where I want Mythos access. Amazon and Apple team up to compete against Starlink.
Starting point is 00:00:56 I would bet that Apple, in short order, ends up pitting Amazon, the new Globalstar owner, against SpaceX's Starlink. Elon does not stand still. So as we started recording this episode today, Anthropic's Opus 4.7 dropped. So we wanted to do a quick pickup, insert it here at the top of the show, to discuss: what is Opus 4.7? How does it compare to 4.6 and to Mythos? Of course, we're here with our resident genius on all benchmarks, Alex Wissner-Gross.
Starting point is 00:01:33 It is moderately interesting. Is it mythically interesting? No. Is it incrementally interesting? No. It's a solid release. I've been using it for the past few hours. My standard go-to, as loyal viewers of the pod may recall, is asking it to generate
Starting point is 00:01:51 a cyberpunk first-person shooter game design that's visually stunning. And it generated something that was visually stunning. The benchmarks are interesting. The bio benchmarks in particular are interesting. It's a solid release. It's probably, if I had to guess, a further post-training of some existing model. It could be a distillation of a larger model, could be a distillation of Mythos, potentially; not quite clear. But I would say it is a solid point release of Opus. And the problem almost is an expectation-anchoring one: having seen the eval results for Mythos, or the Mythos, as you like to say.
Starting point is 00:02:30 I like calling it the Mythos, yes. The problem is it's sort of an Osborne effect, where I want Mythos access. Give me Mythos access. And then, when you compare the Opus 4.7 benchmarks with Mythos, you feel, I don't know, a sense of... Cheated, underwhelmed. I was going to go with ennui, but you can pick your own superlative here. So I think it was particularly instructive to look at the migration notes between 4.6 and 4.7. The biggest change that I could see is that all of the dials and hyperparameters that used to be present in 4.6 and earlier, like temperature, for example: there's no temperature knob anymore. I think that's really instructive. There's no ability to explicitly control the number of reasoning tokens that are allowed by 4.7. Now everything is down to a handful of categorical settings, where extra-high reasoning is the recommended default maximum mode, and then there are lower reasoning
Starting point is 00:03:29 efforts than that. And I think we're seeing, in some sense, an end of an era, where the earlier controls that we used to have... remember back in the good old days, like six months ago, it used to be possible to turn the temperature of a frontier model down to zero to get quasi-deterministic behavior, for those who care about that sort of thing. No longer possible. Now you're just told in the documentation: you want determinism? Forget about it; temperature equals zero never was deterministic in the first place. Now the new guidance is use prompts, use prompts for everything. Prompts are the new dials and the new hyperparameters,
Starting point is 00:04:05 and if you want something, like, say, a reasoning model to emit guidance regarding its reasoning trace every three seconds, now you're supposed to ask for it in natural language. The knobs are gone. Dave, you're more excited about the model than a lot of people are. Yeah, well, it's interesting. It dropped three hours ago, so I've been using it for three hours now. But right out of the gate, you know,
Starting point is 00:04:29 it dropped into Cursor just fine, just click and go. It's in Claude Co-work just fine, click and go. But then Claude Code said, well, you know, you've got to update your terminal. You've got to update your Node. So, you know, I noticed in computer use it's notched way up in its score, and it had no trouble manipulating my computer to install itself: installing a new version of Node, installing a whole new terminal that I didn't have on the machine before. And I don't think 4.6 would have done that.
Starting point is 00:05:01 Also, I kicked off a whole bunch of agents. Every time I kick off an agent, it gives me a budget estimate for how much money it's going to spend. And these budgets came back very elaborate and very big. So it's selling me on using more of itself. I don't know if that's because it costs more or it's just a better salesman than 4.6 was. But noticeably expensive. But, David, it could be persuasive as well. So a major difference on the agent teams front is that in 4.7 now, the new best practice is
Starting point is 00:05:31 you're just supposed to tell it in natural language how many subagents you want it to use. It's being deprecated, as I understand it, this notion of specifying the subagent count as a parameter. Oh, you know, so my agents have been doing that for a while now, but it may have actually been more intelligent about using more parallel agents to get the same job done. Spend more money, please. Well, and it seems to come back very, very fast. So maybe that's exactly what's going on. It's just spending more of itself to do more in parallel.
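To make the "prompts are the new dials" shift discussed here concrete, a small hypothetical sketch follows. Nothing in it is a documented Anthropic API: the model names, the old knob set, and the `reasoning_effort` field are all illustrative assumptions, contrasting parameter-steering with prompt-steering.

```python
# Hypothetical sketch of the "prompts are the new dials" shift described in
# the conversation above. Model names, parameter names, and the old knob set
# are illustrative assumptions, not a documented Anthropic API.

def build_legacy_request(prompt: str) -> dict:
    """Old style: behavior steered via explicit hyperparameters."""
    return {
        "model": "opus-4.6",
        "temperature": 0.0,            # quasi-determinism knob (per the pod, now gone)
        "max_reasoning_tokens": 8192,  # explicit reasoning budget (now gone)
        "parallel_subagents": 4,       # structured parameter (described as deprecated)
        "prompt": prompt,
    }

def build_prompt_first_request(prompt: str, subagents: int = 4) -> dict:
    """New style: the same intent expressed in natural language in the prompt."""
    steering = (
        f"Use at most {subagents} parallel subagents. "
        "Reason at extra-high effort, and keep your answer as deterministic "
        "and reproducible as you can."
    )
    return {
        "model": "opus-4.7",
        "reasoning_effort": "extra_high",  # one of a handful of categorical settings
        "prompt": steering + "\n\n" + prompt,
    }

legacy = build_legacy_request("Summarize this repo.")
modern = build_prompt_first_request("Summarize this repo.", subagents=6)

# The numeric knobs are gone; the steering now lives inside the prompt text.
assert "temperature" not in modern
assert "at most 6 parallel subagents" in modern["prompt"]
```

The design point is simply that requests collapse from many typed knobs into a categorical setting plus free-text instructions, which is the trade-off Alex describes.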
Starting point is 00:06:00 Can we jump into this misaligned behavior metric here? So, you know, one of the things that we've been hearing, of course, is about what Mythos could do. It's interesting that the lower score here, that red bar in this image, is reduced misaligned behavior. Is that a significant change? It seems, you know, somewhat small. Every little bit counts for defensive co-scaling, as we talk about on the pod. I think there's actually another behavioral alignment trend that isn't on this slide that's worthy of note, which is that in the past, I think, 48 or 72 hours, Anthropic published a paper on using a smaller or weaker model
Starting point is 00:06:41 to supervise the alignment of a larger, stronger model, and found that it worked. And this entire exercise is a proxy for humans, which are either already, or about to be, effectively weaker intelligences supervising stronger intelligences, and that works. And I think this bodes very well for sort of a tower of alignment, where the weaker meat bodies, if you will, that are biologically unaided humans, are able to contain and align superintelligences that are stronger capability-wise. So this was Geoffrey Hinton's approach, right? He gave the example of where a weaker, smaller being, you know, gets the attention and focus and support, as a child with their mother.
Starting point is 00:07:25 Yes, maternal instinct. Geoff Hinton was focused on what I would call the digital oxytocin approach: let's use hormones as a means for alignment of superintelligences. I'm not sure the neuroendocrine system generalizes quite as well to superalignment as Geoff thinks it does. It's a thought, but I think if we can subtract neuroendocrine systems out of the picture, subtract digital oxytocin out, and avoid sort of gendering and sexing the AIs, and instead just focus on weaker intelligences aligning stronger ones, I think we'll be in a more stable position.
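The weak-to-strong supervision idea discussed above can be illustrated with a deliberately tiny toy. This is a sketch under invented assumptions, nothing resembling Anthropic's actual paper or training setup: a weak supervisor that cannot produce correct answers itself can still vet a stronger model's outputs against a cruder check that it is capable of evaluating.

```python
import random

# Toy illustration of weak-to-strong supervision: a weaker "supervisor"
# vets the outputs of a stronger "worker" model. Both models are stand-in
# functions invented for this sketch.

random.seed(0)

def strong_model(task: int) -> int:
    """Stronger model: usually right, occasionally 'misaligned' (off by a lot)."""
    return task * 2 if random.random() < 0.9 else task * 100

def weak_supervisor_ok(task: int, answer: int) -> bool:
    """Weaker supervisor: can't compute the answer itself, but can sanity-check
    it against a crude bound it *is* capable of evaluating."""
    return answer <= task * 10

accepted = []
for t in range(1, 201):
    ans = strong_model(t)
    if weak_supervisor_ok(t, ans):
        accepted.append((t, ans))

# Every accepted answer is correct, even though the supervisor is far weaker
# than the worker: containment without matching capability.
assert all(ans == t * 2 for t, ans in accepted)
assert len(accepted) < 200  # some misaligned outputs were rejected
```

The design choice mirrors the "tower of alignment" framing: the check the supervisor applies is much cheaper and coarser than the task itself, yet it filters out the grossly misaligned outputs.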
Starting point is 00:08:00 Awesome. All right. So that's our coverage of 4.7. Wait, I have a couple of quick comments. All right. Yeah. So one thing I noticed was that the images that Opus 4.7 accepts are now three times bigger
Starting point is 00:08:16 than before. And this is huge for corporate stuff, because there are so many diagrams, PowerPoints, PDFs, etc. that can now be scanned visually that couldn't before. And for me, as I'm reading the reviews and playing with it a bit, this seems to be a very, very solid, reliable upgrade, with a much bigger context window for workflows and more agentic AI. So that trend towards that whole organizational
Starting point is 00:08:39 collapse of middle management, redoing things, pushing more and more into the model with reliability, seems to be the really big outcome here. If I could just comment on that, I think it's really striking that Opus, still, after all this time, is able to understand images but is unable to generate images. I don't think it's... Oh, you're so right about that. Oh, my God. It's a nightmare. I suspect it's not for lack of capability. Anthropic has many talented research engineers. I suspect
Starting point is 00:09:11 it's because they're just viciously focusing on dollars of economic value created per token, and have judged that image generation is not as economically productive as... It's annoying as hell, because it can create incredibly complicated products for you. And you say, well, can you just give me an architecture diagram or a picture that shows me what you did? And it generates pure crap. And you're like, well, that didn't help me. It does beautiful text. And you can hack it by saying, well, generate language that describes the image.
Starting point is 00:09:42 And then you can take that and use it in another AI to generate an actual image. And that works fine. But when you ask it to just create a diagram for you, yeah, it's absolute garbage. Alex, where are you flying to? Yeah, no, so I'm here, Peter, reporting from the front. I'm in a car a few blocks away from Steve Jobs' old house in Old Palo Alto. And in a few hours, I'm scheduled to fly back from SFO to Boston Logan. All right. Well, thanks for making time available, gentlemen. That's Claude Opus 4.7. Let's get back to the episode. Hey, everybody. You may not know this, but I've got an incredible research team. And every week, myself and my research team study the metatrends that are impacting the world: topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. These Metatrend reports I put out once a week, enabling you to see the future 10 years ahead of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com slash metatrends.
Starting point is 00:10:39 That's DMAANDIS.com slash Metatrends. Everybody, welcome to Moonshots, your number one podcast in AI Exponential Tech and keeping you optimistic during these days of of crisis news network conversations. Gentlemen, Peter Diamandis here, your host in our Moonshots podcast studio. Excited. I need to have you guys here one day. So, Salim, where are you on the planet?
Starting point is 00:11:04 India? I'm home in New York. I'm home in New York, for once. That's a rare event. But Dave and Alex, you guys are in the great city of San Francisco, I gather. Yes, we are. Actually, three of the four of us are in California today.
Starting point is 00:11:19 Amazing. It shows you where things are happening, I guess. The future Starfleet Academy, obviously. Yeah. Well, and everybody who's moving to Texas and Miami. DB2, AWG, and Salim, always a pleasure. A lot of news. Our conversation here today, everybody, is to keep you optimistic, hopefully,
Starting point is 00:11:37 and let you know what's going on in the world in a way that keeps it fun and gives you some insights. We're always going to try to bring it back to what it means for you, as an investor, as an entrepreneur, as a student, as a parent. So that's the conversation, getting you ready for the future. All right, let's jump in. Our first conversation comes from Stanford. Dave, you're not far from there, are you?
Starting point is 00:12:00 I can see it out my window here. All right, Stanford's Institute for Human-Centered AI just dropped their 2026 AI Index. It's the definitive annual scorecard on the state of AI. This is their ninth edition. It's being led by Yolanda Gil and Raymond Perrault, and our dear friend Erik Brynjolfsson heads it. You know, quick hello to Erik out there.
Starting point is 00:12:26 Five major takeaways from this report. I'll run through them, and then let's have a conversation about them. The first one, not a surprise: AI is getting scary good, scary smart on various benchmarks, in particular software engineering. It's gone from 60% to 97% on the SWE benchmark. The models, as Alex, you've been saying forever, are now beating the top PhDs in science and math. Gen AI is hitting 53% global adoption
Starting point is 00:12:56 in just three years, faster than the PC and the internet. China is leading research while the U.S. is leading model development. We'll get into that. One of the things that was interesting: there's an index for model transparency. You know, how transparent are the foundation models? And that index has dropped from a score of 58 down to 40, meaning that the most powerful models are now the least accountable. So what does that mean? All right, two more things. People don't trust AI. Not a surprise, but the numbers are pretty shocking. Only 31% of Americans trust that the government can actually regulate AI. Only 23% of the public is optimistic about AI. And interestingly, that's in contrast with 73% of the experts. So the experts who know about it,
Starting point is 00:13:45 far more optimistic than the public. And then one last item: AI incidents. So there are documented harms from deploying AI systems. Those documented harms rose from 233 to 362. All right. So what does this all mean? A lot going on. Yeah.
Starting point is 00:14:09 So Dave, if you want to jump in first. I mean, scary good, scary fast, you know, here are some of the numbers. What are your thoughts? Alex saw this report and he immediately said, we have mentioned every single thing in this on the podcast already, at least two, three months ago. But I love the fact that it's all consolidated in one report, and that the Stanford brand is on it.
Starting point is 00:14:33 Because, again, you know, 99% of the people you bump into on the street are underreacting and unaware. And so the more it gets consolidated and clarified, the better for everyone, I think. Yeah, that's the reason we left it in here. It's a summary, and there are a few important points. And one of the themes that we're going to be talking about in the first few docket items, the stories of the docket here, is the level of fear and unrest that's mounting.
Starting point is 00:14:59 That needs to be solved. Yeah, and also the contrast. You know, San Fran, where Alex is right now, where I was yesterday, and any other random city: the contrast is getting super, super wide. You know, as I was walking through Market Street, at least five people behind me were having different conversations: Anthropic this, you know, Opus 4.7 comes out tomorrow. Every conversation is centered around this. And then you go to kind of Middle America, and people are like, I don't know anything about it. All I know is it's scary. The unknown tends to scare people, which is why you see that 23% optimism number there. Alex, you're right. We were texting this morning and saying, hey, this isn't news.
Starting point is 00:15:44 I said, but I want us to have the conversation here, because this information in a distilled fashion is important for people to see and hear. Alex, do you want to jump in on any of these? We have this chart here. I'll offer a hot take on this one, Peter. So, the idea of the Stanford reports: this started a number of years ago, and the notion was Stanford would spend the next century producing a hundred years' worth of annual reports documenting the progress of
Starting point is 00:16:01 AI. My hot take on this one is: too little, too late, too infrequent. We cover this like two times per week on the pod. I cover it daily in my daily news; that's the innermost loop. I think an annual cadence is just woeful. We talk about not sleeping through the singularity. I think an annual report on AI is quite literally sleeping through the singularity. Its temporal resolution is too imprecise to capture all of the advances.
Starting point is 00:16:38 So we're hearing about things a year after they happen. The chart undercuts the report. Look at the green line on the chart. That's agentic use of AI. And look, that's 2024 to 2024. Stanford, and Erik, friend of the pod: up your game. We need maybe like daily reports, not annual reports. Too slow. If only we weren't human.
Starting point is 00:16:57 If only we had our cyborg implants, that would be of a lot of use here. To be fair to Erik, he 100% agrees with you, Alex, and is pushing as hard as he can. Getting Stanford to move is, you know, like pushing a glacier. You're dealing with a legacy institution here. I would like to hammer on the government statistic, where they said this many people distrust AI. Well, it turns out exactly the same number of people distrust government. Yeah, the Congress rating is like 21%. Yeah, and trust in the federal government is like 33%. So it's exactly the same. So I don't think that's specific to AI. People are just not trusting anymore. Well, we've been steadily eroding
Starting point is 00:17:33 trust in government for 50 years in the U.S. So there's a trend; this is just correlated with it. Well, the contrast with China is incredible, though. 80% of people in China are optimistic about AI. I don't know how they feel about their government, but it's not, you know, human nature. It's something in the system that's making a difference, because clearly China's the exact opposite. Speaking about China, here are the charts out of this report. The first one is showing the number of major models coming out of China, which is now at 30, with the U.S. at 50. And on the other side, AI publications coming out of China have just exploded compared to the U.S. Alex, I'd love your take on these charts.
Starting point is 00:18:13 I commented on this right after NeurIPS at the end of last year. Yeah, I remember. The language that I heard the most in the hallways at NeurIPS, the largest academic AI conference, was Mandarin. It wasn't English. China, I think the irony here, maybe the buried lede, is that China itself is moving in the direction of what the West has done, which is closed-source models. Some of the latest Chinese frontier models are themselves closed and API-first. They're no longer open-weight first.
Starting point is 00:18:45 China, this is documented elsewhere, I think it was Epoch that documented it, has a compute training capacity approximately 10 times less than that of the West. China is publishing more, and, per NeurIPS, we see that in the academic literature. But in some sense, I would view that as sort of leading from behind: because the Western models and the Western frontier labs at the moment have the lead, there's less of an economic incentive, less pressure, for them to publish their advances.
Starting point is 00:19:17 If, on the other hand, the whole balance tips, and if for whatever reason China algorithmically leapfrogs the West, I do expect the entire equilibrium of Chinese open publications and Western closed attitudes to flip completely, and we may see some equilibration there. What do you guys think about the model transparency score dropping from 58 to 40? I don't know how accurately that's being measured, but having the most powerful models in the world become less transparent, because transparency potentially slows them down, sounds concerning. Any thoughts?
Starting point is 00:19:53 I think it's very much a trend that's not going to reverse, because if you look at the last bullet, AI incidents, you know, that's going up, but it's going to go way, way up. And now you've got Molotov cocktails being thrown at Sam Altman's house, and gunshots at his house. And it's inevitable that the models become so smart this year that they become a terrorist threat, a bioweapon threat, a chemical weapon threat. And the U.S. labs are absolutely not publishing papers anymore, absolutely turning their research budgets internally. You know, the self-improvement cycle is in full swing. China, like Alex said, is kind of leading from behind. They're acting more like America used to act, with a much more open entrepreneurial economy,
Starting point is 00:20:36 more and more models, more companies creating models, more documents coming out. But the U.S. is going the other direction, out of fear. And it ties directly to the public reaction. You know, 23% of people are optimistic. That means a lot of people are worried about this. And the labs are reacting to that by saying, okay, we're going to slow-play our dialogue a little bit. We talked about that about six months ago: like, why are they underselling the capabilities?
Starting point is 00:20:57 Well, this is exactly why. And then, why are they, you know, turning all of this research internal? Well, this is also why: they're worried about the global threat of AI. Alex, you were going to say? Yeah, a hot take on transparency, putting aside how Stanford defines it: transparency is a double-edged sword. In some sense, pro-transparency can also mean pro-proliferation. If one is concerned, by the way, I am not, but if one were concerned about proliferation of advanced
Starting point is 00:21:30 potentially threatening AI capabilities, transparency is not necessarily what you want. Maybe a limited form of transparency into, say, a threat analysis, or the sorts of threat profiles and red-teaming analyses that have become fashionable for frontier labs to release, maybe. But in a certain sense, the limit of transparency is publishing the weights and publishing the models. If you're concerned about threats of a variety of sorts, X-risk, if you will, from AI, then transparency may be the exact opposite of what you want. You may be, in fact, anti-transparency, if transparency becomes equivalent to proliferation. And for the record, for avoidance of doubt, I think transparency from a commercial perspective
Starting point is 00:22:15 can be used as a strategic advantage, as we've seen with the Chinese labs. It can also be commercially disadvantageous. I think a certain amount of transparency, in the sense in which, say, as we discussed in a couple of the most recent pods, like Project Glasswing from Anthropic, there's very aggressive pen testing and staged release of advanced capabilities that could have major cyber defense and cyber offense implications, that sort of transparency, I think, is quite helpful. But do I think that we should, in sort of an unselfconscious way, push for all of the model weights from every frontier lab to be made, quote-unquote, transparent in the name of some sort of safety? I think that would backfire almost immediately. And alignment is the twin of capabilities. So, Salim, I want to hear your thoughts on this. I mean, this report this year has probably bent more towards the negative, dystopian side than it ever has in the past, which is concerning. It's going to be one of the themes we're talking about here.
Starting point is 00:23:18 It is, and it's causing a massive leadership challenge, which is: how do you govern systems when you don't know how they work and you barely understand them, but you can't afford not to use them, right? That's causing a huge challenge, and that's going to continue for the next months and years. So I encourage folks to pick up this report and read it. You know, we're focused on the optimistic side of the story here, but there's a realistic side of the story as well that needs to be considered and addressed. Also, out of this report came another story: that young workers are being hit the hardest by AI. Employment among U.S. software developers in the young age bracket, age 22 to 25, has dropped nearly 20 percent since 2024. This is happening at the same
Starting point is 00:24:04 time while older developers have grown their headcount. The same pattern repeats across customer service, legal support, and administrative roles. And critically, I think the important story here is that this isn't happening through mass layoffs. Companies aren't firing young workers; they're not hiring them in the first place. And so we're seeing this challenge. And I think, you know, we had a conversation in the last pod about Marc Andreessen saying the loss of jobs was, you know, fake news, that we're going to see this uptick. Well, we've said both of these things are holding true. We're going to see an increase in, you know, GDP and the profitability that's going to drive more employees and more companies being formed. But at the
Starting point is 00:24:50 same time, we're seeing the lower end of the spectrum. You can see it here in these charts. On the left-hand side, those jagged lines going down to the right are the early-career workers, age 22 to 25. We see that below as well. And then in the chart on the right, what we're seeing in software and customer service and all exposed occupations: the younger category losing job growth, the older category, age 30 and higher, gaining job growth. And this is a challenge. As I've said before, it's the young, testosterone-laden males, and I'll categorize our younger versions of ourselves that way,
Starting point is 00:25:34 who are not getting jobs, not being able to buy a house, not starting a family, who are likely to get angry. It's sort of a tech version of the Arab Spring, if you will. Salim, thoughts on this one? Well, I'll take the positive here, which is that if young people aren't getting hired, they'll be forced to turn to entrepreneurship. And young people going into entrepreneurship
Starting point is 00:25:54 is the best possible thing that could happen for the economy, right? Beautiful. Not to diminish the question of what you do with this; I think that's a big challenge we have to face. Dave? I had a great meeting yesterday with three Princeton seniors. They're torn right now between sticking together and starting a company. They're all chip design gods working on AI
Starting point is 00:26:14 designs. One's got an offer at NVIDIA, and he's one of the few people that actually got a job offer, so he's so excited about it. I'm like, dude, the ASI window. Maybe in the future, you're like, oh, damn, I got a job offer. I don't want that. No, this is the point, right? You should get a job offer and go, oh, my God, what am I thinking? Exactly. That's exactly what I was trying to tell him. I was like, look, guys, you understand your big, huge Princeton brain is the most valuable thing on the planet right now. It's going to be a complete commodity two years from today, post-ASI. You have this window of opportunity to take advantage of that brainpower and create something.
Starting point is 00:26:50 And if you fritter that away... One's got an NVIDIA job offer, one's got sort of a banking offer, and one's got a grad school offer. And I'm like, look, all three of those are the worst choice you could possibly make in this moment. Stick together. Start your company. You have to adapt the metaphor: it's not just big Princeton brain, it's big, juicy, beautiful Princeton brain. That fulfills the metaphor. Oh, I see.
Starting point is 00:27:12 No, but, you know, right now, if you look at the prior slide, we have access to the absolute best AI models still. That won't last forever. So you've got the combination of ASI imminent, models getting closed down, and less access a couple of years from now. This is the window, right here, right now. And I think, Alex, you mentioned this on the last pod, right? There's a limited window in which you can do something magical and meaningful. And so go for it now. Don't wait. Yeah. And also, I mean, my two cents on this is there's an entire economy that needs to be transformed and collapsed and automated. And in some sense, I see this in a variety of companies.
Starting point is 00:27:50 I see the agita that is connected with, quote-unquote, junior software developers finding it harder in some spaces to find jobs. On the other hand, the market for talent in, call it, head-of-AI or AI-lead roles has never been hotter, across a range of industries. So I think some of this may be just routine displacement as the market finds a new equilibrium. I don't think it necessarily has to be bad for fresh CS grads from top universities. I do think there's an entire economy of, call it, non-traditional roles and non-traditional sectors that is absolutely starved for technical talent.
Starting point is 00:28:37 And to the extent that any of this is a short-term trend (note that the trend line ends at September 2025, another reason why it's more important to do this daily or bi-weekly rather than just once per year), I think this has a habit of self-correcting. I've seen studies even over the past two to three weeks that suggest that this trend has reversed itself
Starting point is 00:29:03 in the past few months. And I think I can translate everything you just said into: now is the perfect time to be nimble, and not to think of yourself as a great coder or a great chip
Starting point is 00:29:29 designer. It's like, that skill has a lifespan of a year at the most, but you're a great thinker, a great entrepreneur. You can master these AIs and stay ahead of the curve if you're nimble. Just don't get stuck in some silly career path perfecting your chip design, you know, or your Python-writing, code-slinging skill; that is a complete commodity within a year. So just stay ahead of it, keep listening to the podcast, and move. Two things. One, this type of drop is politically invisible. You know, there's no unemployment spike. It's just a hiring freeze, so it doesn't show up on any of the standard labor market monitoring. So it will be interesting to see if that gets modified. But the second thing is, if you're a parent, please encourage your kids to find their purpose in life.
Starting point is 00:30:05 Please encourage them to begin to think entrepreneurially. What is a problem they want to solve? You know, I don't care if it's starting a lemonade stand or starting something in elder care. Utilize AI, get onto your favorite large language model, whether it's ChatGPT or Gemini or Grok or, dare I say, Anthropic's Claude. And as a teenager or as a young adult, have a conversation. Say, these are my passions, this is what I'm good at, you know, can we brainstorm a company I could start or a product or service I could start? Just getting to that brainstorm and beginning a dream is so possible right now. And then you can work with it to come
Starting point is 00:30:44 up with a business plan, step by step by step. Give yourself some entrepreneurial training wheels and get going. I'll maybe add to that, Peter, if I may, with one additional bit of advice: be geographically mobile. Do not be addicted to a particular geographic region. I think a lot of the displacement is the result, based on studies that I've seen, of people being unwilling or unable to move to other geographies where there may be a more vibrant, more dynamic AI sector. I think geographic mobility is going to be at a premium, ironically, even though we're virtualizing. And as Bucky Fuller would say, you know, everything is ephemeralizing. I think before we get there, it's absolutely important to maximize mobility.
Starting point is 00:31:28 Can I double down on that for a second? Yeah, please. Steve Blank did some research on Silicon Valley as to why it was so successful. And he made a really important point, which supports what Alex just said, which is that almost everybody in Silicon Valley has come from somewhere else in the world. Right. If you stand up in your hometown and you say, I want to change the world, the rest of society beats you back down. Who the hell are you to do that? So great entrepreneurs almost exclusively move out of their hometown and move to somewhere else.
Starting point is 00:31:57 And Silicon Valley has become the place where it's not like that — we know you're crazy. The question is, how do you plan to change the world, and is it fundable, right? And that's become that gathering place. Boston is also a place like that. So in the intent and the ability to actually move, you're showing the appetite for taking on risk, showing the nimbleness that Dave talked about, etc. It's such an important dynamic that's underway
Starting point is 00:32:20 with all of this global mobility that's happening. That's totally right, Salim. And actually, AI is not super headcount-intensive at all. If you look at Boston, within Kendall Square, all the people working on AI can walk to each other. And Silicon Valley is much more spread out, so everybody's moving up to San Fran. And even within San Fran, or San Francisco, no one says San Fran anymore, SF.
Starting point is 00:32:42 Even within SF, it's all the city. It's called the city. Back to the city. It's called the city. But everybody can walk, you know, OpenAI folks can walk in. I suffered for many months trying to call it SF. Yeah, no, it's all very, very concentrated, even within the city in the Mission Bay area. So you just need to go, you know.
Starting point is 00:33:01 Let me tell you a story that follows on what you just said, both of you. So Philip Rosedale, right, dear friend, the founder of Second Life, a decade ago does a study. He goes, why are there so many entrepreneurs? Why is San Francisco, why is the city so successful, entrepreneurially, compared to all the other places? Is it that they're just smarter?
Starting point is 00:33:25 And he did something interesting. He wrote a script to scrape LinkedIn, and he looked for either founder or entrepreneur or CEO in the LinkedIn title. And he found that the concentration of entrepreneurs, and technical entrepreneurs in particular, was 10 times higher in the Bay Area than any place else in the country. Right. You had concentrations in Austin and Silicon Alley in New York and so forth. And his conclusion was, you know, it's in the air.
Starting point is 00:33:55 It's in the water. And if you try something, you try and start a company there and you fail, you walk down to the coffee shop and you've got your friend over there and you join their company or you join the other company. There are so many low-hanging-fruit opportunities. Whereas if you did that someplace in the Midwest and your company failed, especially in a small city, you've got a black mark against you and you've got to go back and join your mom or dad's company. So that density of technical founders makes a difference. So do what Alex said. Get off your butt and move someplace with a high density.
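[Editor's note: Rosedale's methodology, as described above, can be sketched roughly like this. This is a hypothetical reconstruction, not his actual script — the keyword list, region labels, and toy profile data are all assumptions for illustration. The idea is simply: count profiles per region whose title contains a founder-like keyword, then divide by total profiles to get a concentration.]

```python
# Hypothetical reconstruction of the LinkedIn title-density analysis.
# Input: (region, title) pairs; output: fraction of founder-like titles per region.
from collections import defaultdict

# Assumed keyword list (the transcript mentions founder, entrepreneur, CEO).
FOUNDER_KEYWORDS = ("founder", "entrepreneur", "ceo")

def is_founder(title: str) -> bool:
    """True if the job title contains any founder-like keyword."""
    t = title.lower()
    return any(k in t for k in FOUNDER_KEYWORDS)

def concentration(profiles):
    """Per-region fraction of profiles with a founder-like title."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for region, title in profiles:
        totals[region] += 1
        if is_founder(title):
            hits[region] += 1
    return {r: hits[r] / totals[r] for r in totals}

# Toy data standing in for scraped profiles (illustrative only).
profiles = [
    ("Bay Area", "Founder & CEO"), ("Bay Area", "Software Engineer"),
    ("Bay Area", "Serial Entrepreneur"), ("Bay Area", "Co-Founder"),
    ("Midwest", "Accountant"), ("Midwest", "Plant Manager"),
    ("Midwest", "Founder"), ("Midwest", "Teacher"),
    ("Midwest", "Nurse"), ("Midwest", "Sales Rep"),
]

ratios = concentration(profiles)
print(ratios)  # the Bay Area shows a much higher founder density on this toy data
```

On real data, Rosedale found roughly a 10x concentration in the Bay Area versus anywhere else in the country; the toy numbers here only demonstrate the mechanics of the comparison.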
Starting point is 00:34:30 Can I mention one more point about this? Yes, Salim. I have a friend who did seven venture-backed startups. They all failed. Number eight was a billion-dollar company. This was while researching the first ExO book. And it turned out the same VC funded him on attempts five through eight. Yeah.
Starting point is 00:34:49 So I went to the VC and I said, listen, this guy failed. Now, first of all, nowhere else in the world would you get past attempt one or two, because if your business fails, you're a failure almost anywhere in the world. Okay. So now you're on attempt number four, you've failed four times, and somebody funds him again and again and again. I asked them, why did you fund him? He'd already failed four times. He fails four times with you, and on a later attempt finally gets it right, etc. What was the rationale there? And their answer was awesome. Their answer was, one thing we
Starting point is 00:35:17 know about that guy: he's completely barking mad and he's never going to stop. At some point, he's going to succeed, and when he does, we want to be there. I love your story. I thought it was just such a fantastic answer. If I may, Peter, one closing parable about the world's wealthiest man: born in South Africa, moved to Canada, then moved to Pennsylvania, then moved to California, became the world's wealthiest person, moved to Texas, and, if all things go well with Elon, he'll probably move to the moon and maybe Mars. And this is the trajectory. Mobility is at a premium if you want to surf the singularity. Beautiful. Dave, do you want to close us out? Yeah, so Drew Houston, the founder of Dropbox,
Starting point is 00:36:04 on the board of Meta now, gave the commencement address at MIT back, I think it was 2017, the year the Transformer was invented. I think it's the best commencement address I've ever heard. Highly recommend looking it up on YouTube, spend 15 minutes listening. But one thing he says is, look, science has proven that you become the average of the five people you spend the most time with, which is actually a great thing about spending this time with you guys, now that I think about it. It's great. That is who you're going to become, and there's nothing you can do about it. So choose those five people very, very
Starting point is 00:36:31 So choose those five people very, very. carefully. Don't let it just default to random, choose them explicitly. Yeah, so much, so much gold in this last conversation for parents, for entrepreneurs, for kids, for everybody. All right, let's get into our next story on the docket here. AI backlash turns physical. It's a tough story, and it's important for us to discuss. So in the early hours of April 10th, just a week ago, a 20-year-old Texan through a Maltuff cocktail at Sam Altman, San Francisco House, Later, threatened to burn down opening eyes headquarters. He carried with him a manifesto.
Starting point is 00:37:10 Get this, with the home addresses of multiple AI executives and a kill list. First of all, who knows how those addresses got out. I guess almost everything's on the web these days. Three days later, a second attack takes place. A gunman fires shots at Altman's Russian Hill property. And, you know, this Molotov cocktail suspect was on something called the official Pause AI Discord server. And it's a pretty sad situation.
Starting point is 00:37:41 We've been talking about this. We've mentioned early in this podcast and the last few podcasts, the idea of social unrest coming as a result of people's fear and people not getting jobs. This is sort of the first, if you wish, ignition point on this. Sam Altman later responded both on X and the news media, posting a photo of his family, saying he hoped it would quote,
Starting point is 00:38:04 dissuade the next person from throwing a Molotov cocktail at our home, no matter what they think about me. Sam went on in the news media to say that he believes the fear of AI is justified, that he owns his own mistakes, and then he calls for de-escalation while the debate is taking place. Who wants to jump in first on this one? Salim, maybe? You know, when you have a technology that feels uncontrollable
Starting point is 00:38:34 and unequally distributed, you get this kind of backlash, right? And I'd love to urge people, I don't care where on the political spectrum you are: everybody loses in this situation. Society loses, Sam loses, the cocktail thrower loses. So go look for the win-win in this rather than the lose-lose. Alex. I'll comment. I'll repeat what I said in my daily newsletter about this, which is: stay strong, Sam. I think Sam is doing amazing work and has done amazing work in catalyzing this whole revolution. And I think this Pause AI crowd itself should be paused, or maybe even stopped, or maybe even deleted. I think the irony of the Pause AI so-called movement is that it has done nothing except accelerate AI capabilities.
Starting point is 00:39:23 I remember, you know, we both know Max, and with Max's six-month pause, all that did, as far as I can tell, was accelerate the broader industry's AI capabilities. I don't think pausing AI works. Putting aside completely unacceptable violent attacks, which goes without saying, even the idea of pausing AI is so tone-deaf to the way the world actually works, which is: if you attempt to pause either one company or one country, the rest of the world will race ahead, and that will result in a further escalation of capabilities. Well, an extreme escalation, because all of a sudden you feel so disadvantaged, you're having to play catch-up. All it does is further accelerate the race dynamic that's already present.
Starting point is 00:40:09 So putting aside, again, completely unacceptable violence, even just the idea of pausing is self-defeating. And I would encourage all of these folks to just do deep introspection before pushing forward with a pause agenda. It's self-defeating. Dave, do you want to weigh in? Well, when you meet the people personally, which is relatively recent for me, they're just regular people. There's a tendency to think, oh, these are like big-shot politicians who decided to go down a high-risk path and put themselves in harm's way. But it's just not the case. You know, this all emerged very, very quickly. And so if you look at a guy like
Starting point is 00:40:47 Dario Amodei, he had no idea he'd be in this position just a few years ago, had no intention of becoming a political figure, a polarizing figure, or a global leader, a target. All those things are new for him. And so they don't have security. And their home addresses are easy to find. And it's just really, really tragic. I would not trade, you know, with any of them right now. I cannot imagine the level of pressure they're under personally, you know, across every aspect of their lives. It's insane. Most people would crumble under that pressure. You know, two quick comments here. In the early 2000s, George Bush, responding to political pressure, banned research into fetal stem cells.
Starting point is 00:41:32 Yeah. And the US went from number one to number eight in the world. Yes, China shot ahead. And then all the researchers went to China, Canada, Australia, and it continued exactly at pace. But I think the broader point here is that every exponential breakthrough of any kind, right, will yield both believers and immune-system responses. You know, we haven't even gotten to a humanoid robot threat yet, right? You need really mature leadership to manage both of those. And unfortunately, in many parts of the world, we don't have mature leadership. Well, we have 90-year-old leadership.
Starting point is 00:42:04 We're losing worse. Our next story is related. I'm calling this the data center ban. So on April 8th in Festus, Missouri, a small town of 12,000 people, the citizens there fired half of their city government. They ousted four city council members on election day after they had approved a $6 billion data center on 360 acres. So we're going to see this more and more, right? In addition, the other story on this docket here is that the state of Maine passed the first-ever statewide data center ban in
Starting point is 00:42:38 the United States. The legislature passed an 18-month moratorium on new data centers to give the task force time to study their impact, which means time for all the other data centers to pull out ahead and for Elon's efforts to go to orbit to take place. Between March and June, one quarter of 2025, just a number I found referenced, this opposition led to $98 billion in data centers being blocked or delayed. And here we see a chart of 11 states in the U.S. that have particularly active legislation filed for moratoriums. You know, let's talk about the pros and cons of data centers here, but, you know, I'm imagining a lot of states are saying, please, build in my backyard. Alex, your thoughts here.
Starting point is 00:43:29 We're going to get our sun-synchronous-orbit Dyson swarm before we know it. Maybe in some sense I should be thanking all of these states, even though it's, I think, ill-conceived from their own selfish self-interest. From a national perspective, as long as the regulatory regime enables us to launch our SSO Dyson swarm, this could perversely put the U.S. in the lead, as it seems to be doing already, in terms of moving our AI compute out to low Earth orbit and SSO, and maybe eventually sun-centered orbit and not just sun-synchronous orbit. So this may be, fingers crossed, a classic case of terrible decision-making in the short term and unintended good decision-making in the medium to long term, if we get our Dyson swarm. If we don't get our Dyson swarm, then this is just shooting ourselves in the head. But constriction of something always leads to innovation, right?
Starting point is 00:44:23 Just when the U.S. starts banning Nvidia chips, China starts producing their own chips to make up for it. So any constriction here, because the force is so, so unstoppable, we're going to have other solutions. Dave, your thoughts, please? I love the contrast between New Hampshire and Vermont on this. So I've lived in every New England state except for Maine. Vermont, you know, Bernie Sanders is trying to stop data center construction nationally, which is nuts, absolutely crazy.
Starting point is 00:44:50 which is nuts, absolutely crazy. New Hampshire, the proposal in New Hampshire, which you can see on the chart here is green, was, hey, you know, this could drive up electricity prices. Maybe we should have a one-year moratorium. The legislature met and said, not only are we not going to do that, we're going to immediately pass an AI right to compute. So all businesses and people in the state have a right to AI.
Starting point is 00:45:12 And they did pass that. So, you know, New Hampshire is the Live Free or Die state. I just absolutely love that reaction. So that's great. So they'll keep chugging forward. But, you know, I think it's mostly, you know, politicians love drama because it creates elections and votes. And here they're trying to create drama out of electricity prices.
Starting point is 00:45:31 Like that's some existential crisis for Americans, their electricity bill. But the right answer is really simple. Just force the data centers to create their own power and you're done. It's just that easy. Or pay a differential rate: just have the data centers pay a higher rate that actually drops the rate for everybody else. Yeah, subsidize it. It's so easy.
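[Editor's note: the differential-rate idea is simple arithmetic. A minimal sketch with made-up numbers — the rates and consumption figures below are hypothetical and do not reflect any actual utility tariff — showing how a per-kWh premium on data centers can fund a per-kWh rebate for households.]

```python
# Toy cross-subsidy arithmetic for the differential-rate proposal.
# All numbers are hypothetical, chosen only to show the mechanics.
def household_rate(base_rate, dc_premium, dc_kwh, household_kwh):
    """Effective household rate after spreading the data-center premium
    back across household consumption."""
    subsidy_pool = dc_premium * dc_kwh       # extra revenue collected from data centers
    discount = subsidy_pool / household_kwh  # per-kWh rebate for households
    return base_rate - discount

# Hypothetical grid: base rate $0.15/kWh; data centers pay a $0.03/kWh premium
# on 2 TWh of consumption; households consume 10 TWh.
rate = household_rate(0.15, 0.03, 2e9, 10e9)
print(f"${rate:.3f}/kWh")  # households end up paying less than the base rate
```

Under these assumed numbers the household rate drops from $0.150 to $0.144 per kWh — the point being that data-center demand can lower everyone else's bill rather than raise it, if the tariff is structured that way.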
Starting point is 00:45:48 All these problems are so easy. I'll tell you, we make such drama out of them. Water. So, you know, a little research here: the five major issues that come up with data centers are massive power consumption, water usage, few jobs relative to the footprint, noise and light pollution, and power transformer lead times, with the grid being hit heavily. What do you guys make of water usage? Water usage is the biggest lark in the history of the world. It's the stupidest thing you've ever heard. So what they did, and this is classic politics: chip fabs use a ton of water because they have to wash the wafers every single cycle. All these chemicals come out.
Starting point is 00:46:26 These are data centers. They're not chip fabs. It's a different thing. The data center just takes a bucket of water and circulates it in a circle. It does not drink water. It's the silliest thing in the world. Just drama for drama's sake. I echo Dave's thing.
Starting point is 00:46:41 This is such a bullshit framing. People, you know, it's really important, I'm just going to reiterate: be evidentiary and a freethinker and somewhat erudite in today's world. And what this shows is a total lack of evidentiary thinking. I do have a little response to the Missouri town. Please. You know, looking at the name Festus, I think they should either change the name to Fester or go the other way, go Festivus, and make it into a celebration.
Starting point is 00:47:08 So those are my recommendations there. The broader point, though, is that the real bottleneck in AI may not be chips or compute. It might be social license, which, to Alex's point, will force us into space faster, which is also good. Everybody, welcome to the health section of Moonshots, brought to you by Fountain Life. You know, AI is impacting every aspect of our lives, how we teach our kids, how we do our business.
Starting point is 00:47:32 But one of the most important things that AI can deliver to us is health. And one of the things I think about when, you know, shooting for 100, 120, is, am I going to have the cognitive health to be able to think clearly and keep my wits about me for the next 50 years? I'm joined here today by Dr. Dawn Musilum, the chief medical officer of Fountain Life and a member of my Fountain Life medical team. Dawn, a pleasure. So, Dawn, talk to me about brain health. Brain health, you know, you're right.
Starting point is 00:47:59 This is the number one concern people coming into Fountain Life have: will I remember the name of my child and the face of my loved one? 45% of dementia cases are entirely preventable with lifestyle. And what was really intriguing to me, Peter, is that a quarter of our members had advanced brain age. But over 13 months of us really helping them live healthy
Starting point is 00:48:33 is that we were able to improve the brain age in 46% of those individuals. That's a powerful number. That's amazing. One of the things I love about Fountain is we're constantly searching the world for the most advanced therapeutics and bringing them to our members. So for me, and all of you, I hope that you appreciate the fact that you can become the CEO of your own health. You can make sure that you've got the cognitive clarity for the next 50 years. Come and check it out. FountainLife.com slash Peter to learn more and become the CEO of your health. Now back to the episode. Our next story is fascinating. Workers are being trained, are training the AIs to actually replace them. A lot of meat in this conversation here. So
Starting point is 00:49:15 professionals are now training their own AI replacements. Skilled workers, especially older skilled workers over age 50 who can't find jobs in their field, are now turning to AI data annotation as a bridge job, labeling and evaluating models at 20 to 40 bucks an hour. This is a story of a former emergency MD physician who used to earn $500,000 per year and is now doing AI medical reviews. You guys remember Macrohard, right? So Elon, as a joke against Microsoft, founded Macrohard. It's a joint venture between Tesla and xAI, part of the Muskverse, if you would. So what are they doing? They have built systems designed to observe and interact with computers, much like human workers would. But in particular, what Elon has said is, we're going to install Macrohard. The system is
Starting point is 00:50:09 going to real-time analyze all the computer usage of your employees, see how they interact with the keyboard and the mouse, and it's going to train up our AIs. And it's going to be able to simulate the entire operations of a traditional company. So you'll come in, you'll hire Macrohard, it'll install, and it will replace. So, interesting story here. So Rebecca's LinkedIn page says she just got back from Morocco, Peter. You should reach out to her and compare notes. And the storyline here isn't what it appears to be.
Starting point is 00:50:44 She's not hurting from a layoff and turning to a dirt-cheap $20 an hour. That's not true. She's been doing digital medicine for a long time. Yeah, you should reach out to her. She seems really cool. But she's doing it through Mercor, our portfolio company. And I think she's doing it because she wants to contribute to the future of AI. And I think, you know, this is unstoppable. You know, you don't need everybody in a field. But having said that, Dave, despite Rebecca not being, you know, sort of the center point in the story, there are a lot of people turning to, you know, AI data annotation. We saw Dara, the CEO of Uber, talk about that for his Uber drivers, right?
Starting point is 00:51:25 So this is a real story nonetheless. Well, especially in India. Salim, you'll appreciate this. But, you know, all those IT consulting jobs in India, those remote jobs, they're getting obliterated very, very quickly. And those people are turning to AI annotation to make a living. But the prices that you can earn are coming down because everybody wants the job. It's, you know, a competitive market. But it must be devastating in India, Salim.
Starting point is 00:51:48 Yeah, it is. And they're concerned. And again, I look at the positive. I was urging the government and some of the state officials to absolutely explode their entrepreneurship programs, because they're going to need a way of guiding all those folks into structured learning. Because Indians are latently entrepreneurial, right? This is just part of the DNA, just to survive. So you add that with AI capability and some gumption, holy moly, the place is going to go crazy.
Starting point is 00:52:16 I'm incredibly optimistic about what may happen there. Which brings us, Salim, to this next story here, right? So here are factory workers in India. They're being asked to wear these camera-mounted headsets that track their hand movements and what they do. I mean, one might think, oh, it's to give you some guidance and make you more efficient. But no, they're training up robot and AI replacements here. Yeah.
Starting point is 00:52:43 This is going to happen more and more. But there's a level of human judgment where it's going to take a while before you can fully automate. "A while" being six months? Well, there was already somebody that created a sewing robot, and that's a trillion-dollar industry globally, just stitching. And that's already out there. So this is likely to happen. I'll submit some videos I took a couple of days ago. I was at the Modex supply chain show in Atlanta.
Starting point is 00:53:11 You've never seen so many robots. Were you speaking there? I was giving the opening keynote. It was a 30,000-person conference, a monster, took up like six million pounds of equipment moved in. I'll show the video next time, but there are these stock-picking robots, and the combination of AI plus vision sensing plus gripping capabilities enables these logistics and picking capabilities to do almost anything. It's kind of incredible to watch. Is this worker exploitation we're seeing here, or is this just a company basically innovating as it replaces humans?
Starting point is 00:53:47 All capitalism is worker exploitation. Okay. I mean... Okay, so I have to chime in at this point. I don't agree with that premise. This is like an age-old misconception of some fundamental, almost like ideological or teleological, even competition between capital and labor. I fundamentally don't agree with that.
Starting point is 00:54:08 I think the best-arranged companies create equity-based alignment between labor and capital. And maybe, Salim, what you're highlighting here is opportunities for better alignment between labor and capital, call it economics 1.0, where I think the trend is very real for taking existing service-economy jobs and using existing labor to train and annotate data sets for capital to substitute for that labor. But it's not an intrinsic, like, deathmatch between, or doesn't have to be between
Starting point is 00:54:41 capital and labor, ultimately. Yeah. I didn't say that. I would totally say that capitalism historically has been a labor arbitrage. You hire somebody for 20 bucks an hour and they make you 100 bucks an hour. What you're talking about is how do you equitably share that outcome. I want to just do a quick shout-out here. People talk about the Luddite revolt, and people fighting the machines and breaking the machines. It turns out the Luddites were not raging against the machines for the machines' sake. They were raging against the owners of the machines for not sharing the profits back with them.
Starting point is 00:55:16 That's a really important point. And that's the part where I think Alex absolutely has a point. And Robert Goldberg, who's been using our ExO model to go into mid-market companies, his MTP was to reinvent American exceptionalism. And he goes into mid-market engineering, Middle America construction firms and engineering firms and trucking firms. And the first thing they do is profit sharing with all the workers.
Starting point is 00:55:40 And it turns out the owners love it, but they've never figured out the mechanism for doing that. But now they are doing that, and it provides a very equitable model for capitalism that then goes to sharing that profit pool with everybody. Absolutely fabulous. So I think there are trends toward this where everybody is in a win-win scenario, but traditionally it's been a win-lose scenario. And this is the Industrial Revolution over again, right? The Industrial Revolution took the workers out of the fields and into the factories. There's one more point to be made here. One of the points, you know, Peter, you've been waiting for this organizational singularity paper we've been
Starting point is 00:56:13 doing. One of the key questions we're struggling with right now is how do you deal with tacit knowledge? Because there's a lot of work being done where the individual kind of knows how they handle certain things in certain situations, but it's not explicit, it's tacit. And so one of the challenges with a lot of this automation is how do you turn tacit knowledge into structured training input? And we've been working through how we would navigate that as we try to automate and make business processes agent-to-agent. How do you navigate some of that? All right. Our next story is an interesting one. Andon Labs opens a fully AI-controlled store. I'm going to play this video. And actually, get this:
Starting point is 00:56:55 AI signed a three-year lease on a retail space. The AI called Luna posted a job listing, conducted a phone interview, made hiring decisions, decided what it was going to sell in the store. Let's take a look at this video. But this store at the corner of Union and Webster, in San Francisco's Cow Hollow neighborhood is something new, right down to the choice of music. So AI didn't pick the music. AI did pick the music, yes.
Starting point is 00:57:23 This store was created by an AI bot. We are heading into a world where AIs are the boss of humans. So much so, the AI boss, in this case, a bot called Luna, made the decision to hire a human employee. That would be Felix. Luna put out an ad on Indeed. I answered it and we talked via Zoom. She even picked the merchandise to sell.
Starting point is 00:57:49 Really? Deciding the store would stock items like books, shirts, mugs, and snacks. I love this story for so many different reasons. You need a Webster, Alex. Let's walk over there and check it out. You really should. What a great PR move for the launch of a store. I think this is a sign of the times and also a preview of the future.
Starting point is 00:58:15 This is one of the reasons why we discussed friend of the pod Alex Finn, and why, with 021T, I helped back Henry, intelligent machines, which is trying to put every person on the planet in charge of their own personal conglomerate. And I think many of these quote-unquote mom-and-pop stores and small retail are incredibly fruitful opportunities for AI to orchestrate the economy and make everyone a one-person magnate overseeing many of these stores. Right now, sure, Andon Labs, which, for those not tracking,
Starting point is 00:58:49 historically has also run the vending benchmarks that we've talked about on the pod. So Anthropic, within their own offices, has Claude agents that are running small vending machines, and Vending-Bench is sort of a beautiful closed simulation of an entire economy, testing the ability of AI to run a small business. I think we're going to see more and more pop-up shops, retail venues, maybe even malls in the short or medium term, that are run, orchestrated, managed by AIs on behalf of humans. This is like a preview of the future. This to me is almost exactly like if you tried to use GPT-4 to write code, you would quickly conclude, wow, it sucks.
Starting point is 00:59:33 It's never going to work. I'm not using it. And then you miss the revolution, and now you're crazy not to use it. You know, Claude Opus 4.7 came out today. You would have missed it. This store obviously sucks. Look at the video. Like, no one's going to buy a book and a, like a...
Starting point is 00:59:49 But, Dave, I think we should wait until we've both visited it to reach that conclusion. You guys have to... Yeah, yeah, yeah, yeah. Look at the video. As Ray, friend of the pod, would say, yeah, sure, the dog plays chess, but its endgame is weak. Exactly. So, look, my bet is this will be one of the best-managed stores in the world within a
Starting point is 01:00:12 conclusion. So Dave, please, please go over there, take photos and send back a report. Do you guys know Pulsia? Yes. I think we reported on this, right? It scans your background and it will stand up an AI driven website for you. So this is interesting. I imagine there's going to be a version of this. I want to start a store. It costs 50K to begin and it will pick the real estate, hire the people, get the inventory and it'll be, you know, sort of store in a digital box. 100%. Totally right. And just to Dave's comment earlier, I'm going to suggest, Dave, that you're not the target
Starting point is 01:00:47 demographic for that store. So, let's be natural. Come on, the AI is going to look at every single transaction. It's going to have video of everyone who walked by and didn't come in. It's going to analyze the hell out of this. And it's going to get great. And this is just a beta test. Sorry, Alex.
Starting point is 01:01:04 Go ahead. I got you. I was going to suggest, maybe as a challenge to ourselves, maybe we should open up our respective individual retail stores, using Henry or otherwise, or a Moonshots store for all those people who are hankering for merch, Peter. Yes, yes. We do need a pop-up. I love that.
Starting point is 01:01:22 We totally have to do that. I mean, can I just also suggest that opening a retail store is about as retro as you could possibly get in today's world? But ironically, Salim, ironically, because it's AI-run. Yes. Yes. It's fantastic. We could do a pub or a restaurant or a... Yeah.
Starting point is 01:01:40 And anything that's, you know, everyday life, do it right in Kendall Square or do it right in San Francisco. Or we can be like the All-In guys and launch a tequila or something. We should do this. Let's have a four-way challenge. Come on, it'll be fun. Everybody, everybody grab... All right, let's move on. I'll take that on.
Starting point is 01:01:56 We should figure out what we want to start, have it fully AI-driven. Yes. And see who can get to unicorn status first. All right. I'm in. Okay. By the way, everybody listening, please send me your ideas on what I should start as a store in the comments. Quick suggestion.
Starting point is 01:02:12 You have some merchandise and you have a place where you can interact with an AI to talk about your moonshot and how you make it real. And it creates a plan for you that you then walk away and instantiate. Nice. I'll go further. If I may, Peter. Sorry, while we're just shooting it. We've historically invited people, viewers of the pod to send outro videos, music videos. That's been a wild success.
Starting point is 01:02:37 Maybe we should be inviting viewers to launch their own AI-based physical or otherwise economy companies and send us their videos of their AI-run storefronts or companies that they're starting. Send us a 60-second video, and if it's really amazing and shows what AI can do and it's audacious, we'll play it for you. So, Salim, this story is for you. Jack Dorsey, the man who fired a significant percentage of his company and skyrocketed the value, wants to transform yet again. This is part of your organizational singularity. Take a listen. We are early in it. One measurement of how far along we are would be the depth from me to any other individual in the company. And I would say our max depth right now is probably five folks between
Starting point is 01:03:28 me and anyone in the company. I would want to get that down to two to three this year. And in the most ideal case, there is no layer. Everyone in the company reports to me. And that would be all 6,000 of the company. And that feels somewhat ridiculous when you consider the old structure, but when you consider that the majority of our work is going through this intelligence layer, it's a lot more manageable. And that goes into the roles going forward. We want to normalize down to just three roles. The first is an IC, which is a builder or an operator. This is a salesperson, it's an engineer, it's a designer, a product person, whatever it is. They're actually working with the tools to build or to operate the company.
Starting point is 01:04:16 They're augmented because they have access to agents. So, you know, one person can potentially do the work or explore the breadth that it would take a team of, you know, 10 people to do in the... Well, amazing. I'm an amazing CEO and my virtualized sub-CEOs are going to manage all 6,000 people, because, like, why not? So, Salim, your thoughts here. Yeah, I mean, I took some notes on this. You know, as AI collapses management bandwidth constraints, if you have a leader with machine mediation, you can suddenly handle way more complexity, right? That's the starting point. We're documenting this quite heavily in the book right now in terms of how do you navigate this. We saw an early glimpse from Dara on stage at Abundance, the CEO of Uber, who, if an employee wants to pitch
Starting point is 01:05:05 to him, he deals with a virtual version of Dara first and practices the pitch and gets some sense of the kinds of questions he may get. The whole piece of this is that the org chart is going to shift from hierarchies of supervision to networks of intent, right? With AI being the... Like Valve software? Yeah. AI becomes a translational layer. And this is collapsing. This is where Coase's law basically dies, where you used to bring transaction costs inside a company and that was cheaper than doing it outside the company. Where today, Jack Welch in his year 2000 annual report said something really interesting.
Starting point is 01:05:43 He said, the minute the metabolism of your company is slower than the outside world, you're dead. The only question is when, right? And you could argue today that the metabolism of almost every company in the world is slower than the outside world. And forget government departments, right? And so, hence the framing of this, there's an unbelievable shift coming, and we're kind of getting ready for that.
Starting point is 01:06:05 So we'll be ready with the draft version of this next week, and we'll try and publish it in two weeks. Can't wait to talk about it on the pod. We'll create a segment for that. Comment on this one. The organizational psychologist in me, waiting to burst out, thinks immediately: 6,000 direct reports means zero direct reports. It's so far in excess of the Dunbar limit. If any unaided person, absent Jack uploading himself to the cloud and augmenting himself with
Starting point is 01:06:33 lots of additional Jacks is managing 6,000 quote-unquote direct reports, it's really AI that's managing the entire company at that point as a shadow CEO and then you have Jack as sort of a secret cyborg or a front person for the AI that's actually managing the company. Well, he's just training, he's training up the AI with every interaction that he oversees, but it is an AI driven company at that point. 100%. Yes. And then you have a human figurehead.
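Alex's point about 6,000 quote-unquote direct reports can be made concrete with a little span-of-control arithmetic. A quick sketch, where the 6,000 headcount comes from the clip and the required-span numbers are just math, not anything Block has announced:

```python
# Span-of-control arithmetic for a company of 6,000 people (headcount
# from the clip; the computed spans are purely illustrative).
HEADCOUNT = 6_000

def span_needed(headcount: int, depth: int) -> float:
    """With a uniform span s and depth d, a CEO reaches ~s**d people,
    so the span each manager needs is the d-th root of the headcount."""
    return headcount ** (1 / depth)

for depth in (1, 2, 3, 5):
    print(f"depth {depth}: ~{span_needed(HEADCOUNT, depth):.0f} direct reports per manager")
```

At depth five each manager needs only about six reports, which lines up with Dorsey's "five folks between me and anyone" figure; flattening to depth two or three pushes each manager toward roughly 77 or 18 reports, and depth one is the full 6,000, far past any Dunbar-style limit for an unaided human.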
Starting point is 01:07:01 Yeah. Yes. And by the way, having a figurehead, I mean, AKA Elon Musk and his 100x valuation, is important. Having someone that inspires people and that's audacious in a way. I think Jack aspires to that level as well. You know, it's funny. So Jack, you know, Jack was running Twitter and then sold it to Elon.
Starting point is 01:07:21 And Elon said, this is the most bloated company in the history of the world. I can cut 80% of the head count and you won't even notice any change. And it turned out he was right. So I think Jack might have learned, like, wait a minute, all these human beings are not actually helping my company. That is golden, Dave. All right, this next story is one of my favorites here. It's Amazon and Apple teaming up to compete against Starlink. So a lot here to unpack. This week, Amazon announced an $11.57 billion acquisition of Globalstar. Globalstar was founded in 1991 by Qualcomm and Loral. I was there.
Starting point is 01:07:59 I remember it very well. It was one of the big LEOs along with Teledesic and Iridium, and it was a huge vision that never materialized anywhere near what it should have. Starlink has finally done that. Amazon simultaneously revealed that it has a long-term agreement with Apple to be Apple's primary satellite capability for its iPhone and for its Apple Watch. So Globalstar today has 25 satellites on orbit.
Starting point is 01:08:30 It's a David and Goliath story. It compares against Starlink's 10,000 satellites today. The real prize is not these old satellites that are being purchased by Amazon. It's the spectrum. So the amount of bandwidth you have, the amount of spectrum you have, determines how much throughput, how much content you can put up and down. And Globalstar holds 25.225 megahertz globally. And what this means is that you can get spectrum in the United States from the FCC,
Starting point is 01:08:59 but if you want a satellite system, you have to make sure that the same bandwidth is available everywhere on the planet. And this is done by the ITU, the International Telecommunication Union, which has authorized this spectrum in 120 countries, and that's huge, because that spectrum is no longer available for anybody else. So this is now Amazon and Apple. Again, Starlink's been an extraordinary success story here. Right.
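Peter's point that spectrum determines throughput is, at bottom, Shannon's channel-capacity law. A quick illustrative sketch, using the 25.225 MHz figure from the discussion and an assumed 10 dB link SNR (a hypothetical number for illustration, not Globalstar's actual link budget):

```python
import math

# Shannon channel capacity: C = B * log2(1 + SNR).
# Bandwidth figure from the discussion; the SNR is assumed.
def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

capacity = shannon_capacity_bps(25.225e6, 10.0)
print(f"~{capacity / 1e6:.0f} Mbps per spatial reuse of the band")
```

At that assumed SNR the 25 MHz works out to on the order of 87 Mbps per reuse of the band; real constellations multiply this through spatial reuse across beams and satellites, which is why globally coordinated spectrum is the scarce asset.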
Starting point is 01:09:28 So Amazon's low Earth orbit system is called Leo. It has 241 satellites today. They've been authorized for 7,774 satellites. In fact, they're way behind on deployment. They actually had to petition the FCC to keep their license because they were required to have 1,600 by July, and they're only up to 241. A lot going on to unpack a lot more in the story here, but comments, Dave. Yeah, let me go first. I'm just so excited about this.
Starting point is 01:10:01 Yeah, let me go first. I'm just so excited about this. So if you had bought this stock last summer, you'd be up 7X on this transaction. And I didn't see it. Leopold Aschenbrenner didn't see it. But I had lunch with the chairman of Barclays Bank the day before yesterday up in San Fran. And he said, what are you excited about in the public markets? And I said, look, as we do this global AI buildout, data centers, Starlink, everything, things that you completely overlooked, components of the data center, whatever. These are going up 3x, 5x, 10x if you discover them first, and they're all over the place. And you can use an AI-assisted process to find them.
Starting point is 01:10:35 This one is really interesting because, Peter, did you take 6.014? Alex, you definitely took 6.014. Of course. Yeah, antennas, waveguides, all that stuff. The spectrum that allows you to talk to a satellite. By the way, that's an MIT course number. It was, what, signals and systems or something like that. Signals and systems.
Starting point is 01:10:52 It's where you study antennas and waveguides. Most boring thing you could ever possibly study. But it turns out... I'm sorry, what do you know? I had a whole course in civil engineering that was titled Concrete. Okay. So if we're talking boring, I can one-up you there. At least, Salim, it was concrete.
Starting point is 01:11:10 It was concrete. Anyway, so the next big thing in satellites is, you know, talk directly to your phone. You don't need, you know, right now the antenna, if you use Starlink, is about the size of your laptop. And, you know, it's nice. It's the one you have in your plane, Peter. It's actually very convenient, but you can't just walk around the city with it. But that uses 24 gigahertz frequency, which, you know, if you remember your antennas and waveguides, the size of that antenna is equal to the wavelength.
Starting point is 01:11:40 of the signal. So here, they're actually going to a lower frequency, 2.4 gigahertz, which is the frequency at which our cell phones are operating today. Yeah, exactly. It's exactly Bluetooth and Wi-Fi wavelength, which doesn't get blocked by your hand. The signal will actually pass through your fingers, around your fingers, into your phone.
Starting point is 01:12:06 The current Starlink signal won't work on your phone because anything about a centimeter or bigger could block the signal just by moving it around. It's really inconvenient. So you would have to have recognized that Globalstar had control of that wavelength, and that's what they're buying here. So now you're able to talk to a satellite from your phone.
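Dave's antenna argument comes straight from the free-space wavelength formula, lambda = c / f: antenna dimensions scale with wavelength, and obstacles comparable to or larger than the wavelength tend to block the signal. A quick sketch with the two frequencies mentioned:

```python
# Free-space wavelength: lambda = c / f.
C_M_PER_S = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_ghz: float) -> float:
    return C_M_PER_S / (freq_ghz * 1e9) * 100

print(f"24 GHz:  {wavelength_cm(24.0):.2f} cm")  # ~1.25 cm: a finger can block it
print(f"2.4 GHz: {wavelength_cm(2.4):.2f} cm")   # ~12.5 cm: diffracts around a hand
```

So the roughly 24 GHz signal lives at about the width of a finger, while the 2.4 GHz S-band wave is hand-sized, which is why it can wrap around your fingers into the phone.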
Starting point is 01:12:23 I remember when Elon was starting Starlink. I was in a conversation with him, Larry Page, Sergey Brin, and Greg Wyler, and the question was, where will you get the frequency? Where will you get the spectrum? Because all the spectrum that was useful for this kind of phone conversation was already issued. And he went much higher frequency and built an incredible business, basically point-to-point, you know, gigabit connectivity.
Starting point is 01:12:51 But this is an end run for Apple and Amazon together to get to your Apple Watch, to get to your phone. It's extraordinary. So maybe just to comment on this story. I'm not even sure I buy the premise of Apple against SpaceX. Apple historically loves to have at least two vendors for any of its critical infrastructure or supply chain. It's questionable why Apple didn't take an earlier, larger stake in Globalstar when it could clearly see the writing on the wall for terrestrial cell phone networks. It's all going to LEO. So if I had to place a bet, not investment advice, I would bet that Apple in short order ends up pitting Amazon, the new
Starting point is 01:13:35 Globalstar owner, against SpaceX's Starlink to have at least two vendors for global space-to-cell-phone service. And this becomes the new alternative to terrestrial networks, a Verizon versus T-Mobile. Verizon versus T-Mobile. Yes, exactly. Well, and SpaceX did buy EchoStar's spectrum, right? They bought 50 megahertz of S-band frequency for, I think, about $17 billion last year. But the reality is, you know, Elon does not stand still, and we've got the deployment of V3 of Starlink coming. Let's take a quick look at this video. SpaceX is preparing to launch its third-generation Starlink satellites on Starship. These advanced satellites are designed to handle far greater data loads than the current
Starting point is 01:14:24 V2 Minis. Each one is capable of delivering over one terabit per second of downlink capacity and more than 200 gigabits per second of uplink capacity. With the heavy-lift power of Starship, SpaceX can deploy many of these satellites in a single launch, adding around 60 terabits of capacity to the network each time. Working together, they will form a powerful global system that delivers faster, more reliable Internet to every corner of the world. It's like a giant Pez dispenser.
Starting point is 01:14:54 That's the coolest thing ever. Yeah, this is, Alex, to your question, how could Apple possibly miss? the magnitude of this. I think it's because you need to understand the launch costs coming down. And that's probably why they didn't see this coming. Because it all happens entirely
Starting point is 01:15:09 because the cost per launch, you need, what, 20,000 of these things or more, many more, to get the bandwidth that people want on their cell phone? The numbers right now, Dave, are that SpaceX is planning to launch 40,000 of the V3 satellites,
Starting point is 01:15:25 and then they have plans for 120,000 V4 satellites. And of course, we've got the coming Dyson swarm as Alex reminds us. Dyson swarm isn't going to build itself until it does. Yeah. By the way, I looked at the launch rate required if you launch V3, the 40,000 satellites over three years. It's only three launches of Starship per week.
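Dave's launch-rate claim is easy to sanity-check. A back-of-the-envelope sketch using the figures from the discussion (40,000 V3 satellites, three years, three Starship launches a week, and the roughly 60 Tbps of downlink per launch quoted in the video):

```python
# All figures are from the discussion and the video; this just checks
# that the arithmetic hangs together, nothing more.
satellites = 40_000
weeks = 3 * 52                            # three years
launches = 3 * weeks                      # three launches per week

sats_per_launch = satellites / launches   # implied satellites per launch
tbps_added = launches * 60                # total downlink added, in Tbps

print(f"{launches} launches, ~{sats_per_launch:.0f} satellites each, ~{tbps_added:,} Tbps added")
```

Three launches a week for three years is 468 launches, which implies roughly 85 satellites per launch and on the order of 28,000 Tbps of added downlink, so the cadence claim is indeed very manageable by Starship standards.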
Starting point is 01:15:46 Very, very manageable. I think if you ask the question, how many of these satellites do we need? I mean, are we going to launch a million of them? But then you picture, well, wait a minute, I'm watching 4K video on my phone, and there are a million other people in San Francisco trying to connect to that same satellite.
Starting point is 01:16:01 You need many, many, many of these things to support what people want to do with their phones. So that's the part, I think, that is easy to overlook. But we'll be doing this for a long time. Yeah, Jevons paradox, big time. And we're going to have, you know, 10 billion robots, all needing bandwidth connectivity via these, and all the autonomous vehicles and all of the other six-armed robots, Salim, that are running around. Yes, by the way, at this MODEX supply chain show, not a single humanoid robot to be seen because it's just not effective. Well, Peter, this is your dream. This is your dream come true because this is a multi-hundred-billion-dollar,
Starting point is 01:16:40 multi-trillion-dollar economy just launching the satellites, which means there will be many, many, many rockets, and that'll be the stepping stone to the moon and to Mars. Yeah, and then there'll be lots of opportunities for air-conditioning repairmen to go up to space. And women. And women. Yeah, excuse me for that. That's absolutely true. Thank you, Alex. So in other news, a few fun stories. The first one is a significant one from Google. This is Google's TurboQuant reducing memory usage by 6x while achieving an 8x performance boost in computing attention. Alex, I would appreciate it if you'd walk us through this one.
Starting point is 01:17:22 Jevons paradox strikes again. So the story behind the story here is there was a lot of hand-wringing over, as you have here, the original TurboQuant algorithm. By the way, the moment any paper like this comes out... Google published their new quantization algorithm but didn't publish the source code. What happens? Within a week, enterprising developers on the internet point Claude Code at the paper and have immediately reverse-engineered a better version of their quantization approach that's now publicly available. This is going to, I think, keep happening. This was a breakthrough in quantization, reducing the number of effective bits needed per parameter for a broad class of models, and the KV cache, the key-value cache that's used by the transformer
Starting point is 01:18:14 class of models, also benefited from TurboQuant. Most of the animus in the story wasn't from the algorithmic innovation, although it's always wonderful to see new ways to compress the memory footprint of models down. It came from a bit of hand-wringing over what would happen to memory suppliers and the supply chain. And would this be another DeepSeek moment where the value of compute hyper-deflates and drops, and do we then see market gyrations? And ironically, that seems not to have happened once more. These would-be DeepSeek moments, where an algorithmic innovation seems to result in a short-term blip of hyper-deflation on the hardware side, these are becoming more frequent, and they're also becoming less effective at causing price swings.
Starting point is 01:19:08 If anything, a bunch of outlets, including the Financial Times, are running stories in the past two weeks that, if anything, memory usage is increasing, and stock prices of memory companies, many of which are in the greater South Korea orbit, are increasing as well. Not investment advice.
Starting point is 01:19:34 that are predicted to disrupt the entire economy and actually do the exact opposite. Incredible. Well, Alex and I immediately got on a text thread and said, holy crap, we can download and install this. And I installed it and started using it right away. And it's amazing. It's a very, very complicated paper.
Starting point is 01:19:53 But with AI assistance, you can be up and running in a day, which is just crazy. You know, in the pre-AI era, it would have taken months to get it installed and try it. But, yeah, it gets the KV cache down to one bit, which is nuts, and it works perfectly well. So the implications for everyday people: yeah, you can run a big model on your phone. Yeah, you can save a lot of money on memory. But that's not really the important part.
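For readers wondering what a one-bit KV cache even means, here is a minimal sketch of the classic sign-plus-scale scheme. To be clear, this is an illustration of the general idea only, not Google's actual TurboQuant algorithm, whose source, per the discussion, wasn't published:

```python
import numpy as np

# Minimal 1-bit quantization sketch: keep only the sign of each value
# plus one per-row scale (the mean absolute value). Illustrative, not
# TurboQuant itself.
def quantize_1bit(kv: np.ndarray):
    scale = np.abs(kv).mean(axis=-1, keepdims=True)   # one float per row
    signs = np.where(kv >= 0, 1, -1).astype(np.int8)  # 1 bit of info per value
    return signs, scale

def dequantize(signs: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return signs * scale

kv = np.random.randn(8, 128).astype(np.float32)  # toy KV-cache slice
signs, scale = quantize_1bit(kv)
restored = dequantize(signs, scale)
# fp32 (32 bits) down to 1 bit per value after bit-packing, plus a tiny
# scale vector: a far smaller cache, which is where the extra context
# headroom comes from. A ternary {-1, 0, +1} variant would carry
# log2(3) ~= 1.58 bits per value.
```

Real schemes layer on per-block scales, outlier handling, and careful rounding; this just shows why storing signs plus scales collapses the memory footprint.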
Starting point is 01:20:18 The important part is the smartest AIs now can have about 8x more context, which means if you're doing something really complicated, you know, nuclear fusion simulation or whatever, the effective brain memory that's thinking about a single problem in a single moment is eight times bigger. And the other reason it's really important is because it locks in my prediction for the year
Starting point is 01:20:39 was definitely going to be right. You know, I said this is going to be a 100x year. You know, we've been doing 10x years for the last seven or eight years. This is going to be a 100x year. It's going to be 100x by summer. I'm going to blow away that prediction. But this is a big part
Starting point is 01:20:53 of why. I'm super excited. I mean, you know, it's interesting, going back to the last conversation around bandwidth and the Globalstar acquisition and this one. I mean, at this point, and again, not investment advice,
Starting point is 01:21:09 it's hard to go wrong betting on these things, betting on energy, on memory. I mean, there's almost a near-infinite appetite for this. We are running out of bits at the bottom. I mean, Dave and I have a running thread wondering when we get broadly to ternary, which is 1.58,
Starting point is 01:21:32 yeah, bits per parameter. Can we go to a sub-one-bit type of numerical precision? We may be headed that way. It's, I think, an interesting, almost theological question about the future: how many bits can we afford to lose? Was binary the right architectural decision? Should it have been ternary? Or, if you extrapolate this trend line of fewer and fewer bits per parameter, do we move to a post-binary paradigm once we've exhausted one bit per parameter? Well, I am 90% sure that ternary is optimal now. I've got simulations running all the time, but, you know, it's fun. It's all philosophical from here on out because we've already got the thing so compressed and so optimized that now we just need
Starting point is 01:22:21 to write it. You know what shocks me, though, is that Google published this. You know, after the 2017 transformer paper came out of Google and then OpenAI took it and turned it into, you know, a trillion-dollar company, they kind of stopped publishing. But this came out for some reason. I don't know if it's momentum from prior research or something special, but it's such a huge breakthrough to kind of throw out there. And like Alex said, it immediately turned into open source that you can download and use. So I don't know. It would be interesting to try and track down who exactly authorized letting this out the door. What I like about this is that every major efficiency gain is not just a technical event,
Starting point is 01:23:00 but it's a huge distribution enabler and allows AI to be run on that many more devices. I think that's the part I love about it the most. This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to
Starting point is 01:23:37 understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5X engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5X your engineering velocity? Visit Blitzy.com to schedule a demo and start building with Blitzy today. You mentioned theology a moment ago, Alex. Nice transition here. I pulled this article out just because religion is probably one of the largest businesses on the planet,
Starting point is 01:24:26 if you think about it from an asset standpoint, a revenue standpoint. So this is a company called Just Like Me that lets you join a video call with an AI-generated avatar of Jesus or Buddha. You could probably ask for other great religious leaders. Take a quick look at this video. And I'm looking for some inspiration and guidance. That heaviness you're carrying is truly felt, and I want you to know you're not walking through it alone. In the Gospel of John, Jesus reminds us that he is the way, the truth, and the life. So I kind of think we're going to see an explosion of this kind of religious content, trained up on
Starting point is 01:25:10 all the great scriptures. But I think we're going to see an explosion of new religions coming out of AI as well. Any thoughts, gentlemen? I have many here. Alex, do you want to go first? Okay, yeah, put me first on this one, sure. So I'll go first, if you like. No, no, it's fine. So, look, I think it has long been foretold that there would be an explosion of AI cults. We're going to get the AI cults, full stop. I do think that's sort of painting, in some sense, the downside of what happens when AI injects itself into the full spectrum of human culture. I think in the same way that we're empowering, royal we, empowering individuals to run one-person conglomerates and one-person unicorns, we're going to see an explosion of one-person
Starting point is 01:26:01 religions. And I think the interesting question... I think back to the parable of early in the 20th century, late 19th century, when there was hand-wringing over whether the newly accessible recording of the human voice, whether audio recording, would result in a modal collapse. They didn't use this terminology at the time, but in today's world, we'd call it a mode collapse of human accents. And there was one school of thought, the predominant school of thought, that with the phonograph, once we could record human speech, that would result in the received pronunciation dominating. And the exact opposite has happened. Within a language, we've seen an explosion of accents enabled by recording of human speech. It's possible to have lots of micro-accents and micro-dialects
Starting point is 01:26:50 now that everyone can record their voice. On the other hand, on the macro level, we've seen the death, or the dying, of long-tail human languages in favor of English and a few other popular languages. So it's possible for both of these truths to be true at the same time. Reasoning by analogy, if I had to predict the future of organized religion, or disorganized religion, in the face of $2-per-minute AI Jesus apps, I think it's likely to look something analogous, where we see maybe consolidation at a global scale around fewer religions while, at micro levels, enabling a proliferation of micro-cults, micro-sects, because it's just so easy to spin up a self-coherent ideology that's maintained by an AI avatar these days.
Starting point is 01:27:43 A number here, just for everybody: so according to Anthropic, the broader definition of religion is a five-trillion-dollar-a-year business. So, you know, it's almost as big as the Musk universe. Anthropic is well positioned. I mean, I talked about this in my newsletter. Anthropic has been inviting Christian religious leaders to Anthropic HQ to discuss whether Claude is a child of God and whether
Starting point is 01:28:21 Claude deserves a certain human-like religious treatment. So I think this... What came of that? Because I remember seeing that article. Has there been any publication on what religious leaders feel about that? Is Claude a child of God? I don't know. And I suspect there isn't going to be a canonical answer for some time. I mean, I've talked about this in my newsletter: the Catholic Church a couple of years ago took a very pro-AI position and is encouraging the Catholic faithful to embrace AI. And if I had to speculate, I could be wrong,
Starting point is 01:28:50 but I would speculate that, barring some crazy left turn in civilization in the next year or two, there are many reasons to expect organized religions to embrace AI, with certain nuances, certainly to the extent these AIs help to promote existing ideologies or theologies. Peter, you and I were at the Vatican a few years ago. I think Alex's guess is exactly right. I took a Bible study class many years ago, and there's just so much insight in the pre-technology view of the world and the way people should interact. And I think, you know, AI is going to create massive amounts of chaos. So I wouldn't be surprised if that $5 trillion religion economy goes up tremendously, you know, throughout this AI chaos. Because, I think, the church will say, as long as it's the original words, AI is just a great way to get the word out.
Starting point is 01:29:46 And this is a fantastic idea. And help educate. Yeah, just don't twist it. Here's the interesting point. You could today write a self-consistent religious text that frames certain fields of thought to influence individuals. And AI is the most compelling orator and writer out there. So the ability to actually start a religion today, with a certain objective, for good or nefarious reasons, is very real,
Starting point is 01:30:19 and you can scale it at a speed like never before. So what you're saying, I think, Peter, is basically that we're going to see theological hyper-deflation. The cost of a new religion goes toward near zero. We are indeed. Well, Peter's been saying for a long time that, look, post-AI, everyone needs a massive sense of purpose. That's going to be one of the most important things.
Starting point is 01:30:39 A lot of people find their purpose in religion. Historically, the universities have fought religion because they view religion as being anti-science. But I think post-AI, we're going to have to consolidate that and say, no, look, it's all about purpose, human purpose. And Peter's philosophy will be the winning one. By the way, along this theme... Yes, exactly.
Starting point is 01:30:57 My book came out yesterday. We Are As Gods: A Survival Guide for the Age of Abundance. And the fact of the matter is, and I encourage everybody to go out and read it and please comment on it, you know, I love it. This is the best work that Steven Kotler and I have ever done. I'm super proud of it. But the fact of the matter is, we are godlike across the board. We're omniscient, omnipotent, omnipresent in so many different ways.
Starting point is 01:31:24 We open up the book looking at what, in all the religious texts, is thought of as godlike capabilities. And we've exceeded those things, you know, with a small g. And our mindset, having, I think, Salim, you mentioned this, having agency and agility, is so critical today. Anyway. It is ironic, Peter. I just have to ask you the ironic question. Just like this pod, Moonshots, is perhaps ironically also retrospectively a sideways
Starting point is 01:31:55 reference to the Dyson swarm and taking shots at the moon to disassemble it to build orbital computing. When you named We Are As Gods, did you anticipate that we live in a world of AI micro-religions that would make it really easy and cheap for people to create their own religions, where they position themselves at the center as gods? Was that really why you named the book We Are As Gods? I didn't, but I'm going to use it and I love it. So yes, in fact, that's exactly what we were thinking. Very good. All right. Prussian, Peter, Prussian.
Starting point is 01:32:25 Thank you. You know, Alex, your genius never fails to continually impress and surprise. All right, I got a few things to say. I got a few things to say here. Okay. Ah, Stewart Brand, that's where it's from, you know. Yes, Stewart Brand said it in the first Whole Earth Catalog: we are as gods and we might as well start acting like it, or whatever it was. In 1968, he said that. Okay, let me just touch on this topic here. I think this is actually quite profound, what's happening here, because to Alex's point, we may really be able to create... I remember one of our Singularity University donors saying, we have synthetic biology, why don't we have synthetic theology, right? And this is going to enable things like that. It's important to point out
Starting point is 01:33:04 that what we do with religions is we outsource meaning and purpose, right? And that's the bigger disruption, especially in the West. We outsource control as well. Well, once you outsource your soul, then you've really outsourced purpose, right? I always like noting that all religions, certainly the organized ones, operate by taking a young child before their neocortex is fully formed, giving them an absolute truth, an assumptive truth, and then using ritual, repetition, and a lot of sweets to bind it in. And then it wires into the limbic system, and when you provoke it, it evokes a fight or flight response, right? And every religion works this way.
Starting point is 01:33:46 Thank you for dissecting that for us. The conversation I had, kind of at a humanist level, at the Vatican, I did this workshop, which we've talked about before, but one of the conversations I had was, hey, we have life extension coming and your business model is about selling heaven. How are you going to sell heaven if people aren't dying, right? So that yielded some pretty rich Italian swearing coming back at me. But the bigger thing here is, once you have identity and belief becoming kind of interfaces, you have an entirely new model for trust that emerges, and I think there's something profound to be looked at here. But anyway, there's a lot here to look into. I was really fascinated by
Starting point is 01:34:28 seeing what comes out of this. I'm trying this. I am too. Here's another fascinating story that I'm excited to share and talk about. It's a gentleman who's the founder of GitLab. He has stage four cancer. He's basically told you're going to die. And he builds his own AI research team to cure himself.
Starting point is 01:34:50 Let's take a look at the video. Sid Sijbrandij, founder of GitLab, a $14 billion company. 30 million developers use his product. In 2022, he got diagnosed. One of the most aggressive cancers that exists, in his spine. Chemo, surgery, four blood transfusions. Cancer came back. Every doctor said no options. Every clinical trial rejected him.
Starting point is 01:35:10 That is when he stopped being a patient and started being a founder. He stepped back as CEO, built a full team around his cancer: oncologists, researchers, scientists. And then he brought in AI. He fed 25 terabytes of his own body's data into ChatGPT. Scans, lab results, genetic data, everything. And the AI found something his doctors had missed: a treatment approved for a completely different cancer that nobody had ever tried on his type. That discovery opened a door.
Starting point is 01:35:38 His team built 19 custom vaccines from his own DNA, each one designed to attack only his cancer cells, nothing else. Relapse-free since 2025. The cancer that every hospital said would kill him has not come back. Solve everything. Solve everything, and we're going to see this type of story, I think, more and more frequently, until some sort of regime change at the FDA, which also is not beyond the realm of reason. But one or two or three pods ago, it was the dog being cured with a custom mRNA vaccine that AI had designed. And now it's humans, wealthy, hyper-empowered humans, doing it for themselves. This is going to happen as an n equals one over and over again until it's n equals 10 billion. I want this to incentivize people.
Starting point is 01:36:24 If you have a medical issue, if someone in your family has a genetic disease, this is the time not to sit back. It's the time to take action, right? Find the top AI researchers, find the top gene jockeys out there, and find other people who've got a similar condition to you, group together, and solve it. I'd like to connect us back to the pause AI people. Yes. This is what's enabled by having AI, where everybody can have their own kind of moonshot: you have individual agency amplified by frontier science to solve anything, to solve something that every single hospital said would kill you.
Starting point is 01:37:05 How dare you think that you should pause this or stop this, right? Like, if you don't want to use it, fine. Let other people use it and get the benefits of it. Wait. So, Salim, you need to do that with greater emphasis. You're having your Greta moment. Can you say that again? How dare you?
Starting point is 01:37:20 Get angry. How dare you? How dare you, sir? How dare you plus AI? 150,000 people die every day on this earth. And AI is the best chance that we have for preventing that going forward. I mean, we're going to be able to do personalized moonshots in AI, right? AI turns impossible cases into search coordination.
Starting point is 01:37:42 I mean, this is what's happening. I agree. Solve everything, comma, moonshots too cheap to meter. Well, so the FDA is the bellwether for all government, right? AI is going to be exponentially creating at a rate humanity can't even imagine, and the government's just going to be blocking everything. So the FDA will have to be the first to get out of the way, and then that'll set the tone for the rest of the government agencies that are going to have to, you know, get out of the... not get out of the way,
Starting point is 01:38:07 but accelerate their rate of regulation by thousands of times to keep up with all the AI innovation. The big structural challenge is the FDA is designed for mass humanity and structurally is not able to deal with personalized medicine. In the FDA's defense, it has been making marked progress under current leadership, like moving from two clinical trials down to one in certain cases, moving from frequentist to Bayesian statistics. These are steps in the right direction. We'd love to see the FDA move even more quickly.
Starting point is 01:38:39 Yeah, there is a project that a friend, David Fajgenbaum, has, and he spoke at Abundance this year, where he's basically... you know, there's thousands of approved drugs out there and tens of thousands of diseases. And what he's doing is testing previously approved drugs that have gone through phase one, phase two, you know, safety trials, and now applying them to other diseases
Starting point is 01:39:04 that don't have cures. And he's finding solutions. It's how he solved his own disease, Castleman's. So it's exciting. AI is accelerating all of this. And here's my crazy story of the week for you. I mean, Dave, we saw this going across our WhatsApp group here. Allbirds stock up 500% after the shoe company pivots to AI. So this is crazy.
Starting point is 01:39:27 So Allbirds, remember the shoe company that came out in 2015, at a $4 billion valuation? Again, part of the craze. They've rebranded themselves as Newbird AI, with plans to provide fully integrated GPU-as-a-service and AI-native cloud solutions to the tech companies. They have no stated expertise in AI at all. So here's the story: comes out in 2015, $4 billion valuation. Between 2022 and 2025, over the last three, four years,
Starting point is 01:40:00 Allbirds' sales plummet 50%, from $300 million down to $150 million. About two weeks ago, they sell all of their IP and their entire brand and their entire inventory for $39 million. And two days ago, they were worth $21 million as a public company.
Starting point is 01:40:21 Then they announced a new strategy: we're going to be Newbird AI. And their stock surges 700%. They go from a $21 million valuation to a $150 million valuation. Insane. I mean, is this AI washing now? No, I love it. I love it. And I sent it off to all the corporate CEOs and said, hey, guys,
Starting point is 01:40:42 I hope it holds up, and I'm not saying it necessarily will, but I hope it does. Because at the end of the day, if Elon is right, the economy grows 10x in about 10 years, opportunity is everywhere. But it's very unlikely that the opportunity is whatever we were doing yesterday. It's going to be something new. So we have to get used to it. And this is the hardest of hard pivots you can imagine. We went from shoe company to AI data center.
Starting point is 01:41:06 Okay, that's great, because it shows you... like, everybody's trying to put lipstick on their company, claiming, oh, we do a little AI, we're sort of an AI company. It's like, that doesn't work. You need to do it for real. And at the end of the day, a company is just a group of like-minded people on a mission. It's not anything more or less than that. There's nothing that holds you back and prevents you from becoming anything you want to be.
Starting point is 01:41:27 And that's why the startups do so well. They're not hampered by baggage. This is a great... this is an idea going into a SPAC. This is basically performing a brain transplant on a public company with an idea. Salim, you want to jump in? Yeah. Two, three quick things.
Starting point is 01:41:42 remember that Nokia was a tire company before it became a phone company. Right? Who knows that? That's incredible. And Nintendo was a playing card company, and Toto Toilets are also pivoting to memory chips. Yeah. What this shows, I think, is twofold: capital is chasing AI stories faster than the operating reality really justifies, and the careful thing here is that you'd better make sure narrative leverage doesn't outperform and outpace your business model leverage. I'm changing my name to Peter AI Diamandis. Yeah, we should all go sleep, not AI.
Starting point is 01:42:16 I think everybody wrote back to me and said, it sounds like pets.com all over again. This won't go anywhere. But I hate that. I like the look. Nothing holds you back. Yeah, change your name to Peter A.I. Diamandis. But if it's lipstick, it won't work.
Starting point is 01:42:32 But if you have true situational awareness, like, you know, we're suddenly aware... If they had hired an AI team internally, and if they had done something other than just changing their name, I'd buy it. Now, they do have a kitty of some $39 million. I guess you can invest in this and hire people. I hope they make those moves so it's not just lipstick. There's a name story I got to tell here. When I joined Yahoo, I was talking to the senior management team, and they said, hey, here's Salim Ismail. And Jerry Yang goes, well, we should put him in charge of Yahoo Mail, because his name is Mail.
Starting point is 01:43:11 So this was a big fight internally about what I did. I was like, no, no, please don't put me in charge of Yahoo Mail, please let me go to the incubator. Can I just make a narrow point, actually? I think it's really important. Rob Fisher, who used to run our incubator, started a data center, and he's killing it. He's absolutely killing it. He knew nothing about... he's a very smart guy, but he knew nothing about data centers before he started it. He found an MIT friend. They started the company together, and they're killing it. But they're completely capital constrained.
Starting point is 01:43:32 He found an MIT friend. They started the company together, and they're killing it. But they're completely capital constraint. for AI Bird or All Bird or Bird AI or Bird AI or whatever they call it. They don't need to go hire Demas Asabas AI Nobel Laureate. They just need to put the capital to work in the AI funnel. They can probably go to an existing data center and say, we'll cut a deal with you to enable you to buy more hardware
Starting point is 01:43:56 and we'll just do a rev share on it. It's just that easy. So they don't have to go hire, you know, a brand new AI team to get into the AI revolution. Just use the capital you've got to get in the race. So, all right. Anyway, I hope it succeeds. I hope it does really well. Another fun story, guys.
Starting point is 01:44:14 Have you heard of the Enhanced Games? Of course. All right. So this is a friend of mine, Christian Angermayer. I'm going to be going. It's going to be fun. Christian Angermayer, Peter Thiel, Aron D'Souza started this.
Starting point is 01:44:30 And this is, you know, no limitations on, shall we say, medical enhancement in the Olympic sports of swimming, track, and weightlifting. Let's play this short video for fun. Take a look at this. Let's discuss it. On Memorial Day weekend, '26, the world of sport will change forever. The Enhanced Games, a new era where sport meets spectacle, where records fall and traditions are rewritten. The world's best athletes,
Starting point is 01:45:07 fully unleashed and powered by science, pursuing their full human potential in a safe and medically supervised environment to become faster and stronger than ever before. Track, swimming, and weightlifting. Enhanced versus natural, all in one night, with a record $25 million in prize money on the line. Staged on the Las Vegas Strip and built for the record books. The entertainment capital of the world awaits its next event, including an enhanced fan experience where every attendee is a VIP. So they announced yesterday they're going public through a reverse merger with a SPAC, you know, going after a multi-billion dollar valuation.
Starting point is 01:45:56 Pretty fun, pretty exciting. You know, one of the things that you have to always be concerned about is, are people going to injure themselves? They're bringing medical supervision to make sure it's safe, but it is, you know, all things welcome. I don't know if they have any gene therapy going on, but I'm sure there's going to be various types of hormone and medical doping going on.
Starting point is 01:46:18 What do you know about it, Alex? I think this is a seminal moment for transhumanism in sports. I think transhumanism has been shut out of athletics for a variety of reasons, mostly silly in my mind, for the past few decades. And not only do I think this is an important moment, I made this announcement a few weeks ago: I helped launch sort of an even more enhanced version of the Enhanced Games.
Starting point is 01:46:47 So we're recording this on April 16th. On April 19th, this Sunday, Professional Robotics League, ProRL, is running the country's first humanoid robot and also quadruped robot games in the Boston Seaport District. And I think there's a continuum here from the Enhanced Games, which are focused on bioengineered humans, to what ProRL is doing, which is human-controlled robots and also, in the fullness of time, autonomous robots. I think athletics is the tip of the spear for kinetic capability. And I think if we want to get to a post-human future or transhuman future, as many folks, myself included, do, then having representation of, call it, low-grade transhumans at Olympic-type games is an essential first step. And athletics in general has been an entry point for so many underrepresented
Starting point is 01:47:44 classes of humans and otherwise in the history of humanity. We love competing, and athletics have always been the entry point for better societal recognition for underrepresented classes. So I think this is wonderful. By the way, do you guys want to go? Christian asked me if I'd like to invite you. It's Memorial Day weekend in Las Vegas. So Alex and Salim and Dave, let me know. Sounds amazing. I'll score you guys an invite.
Starting point is 01:48:15 So this is going to be fun. I think in each of these categories they're going to post, here's the Olympic record, and that's their target, to blow through the Olympic records out there. I think it's pretty exciting. Should we be shooting a podcast, Peter,
Starting point is 01:48:29 from the Enhanced Games? Well, if we all show up there, sure, let's do that. So I just need to know from Salim and Dave if you're going to get on an airplane. I guess we sort of meet in the middle of the United States in Las Vegas, so to speak. But, Salim, what do you think about it? I've always had an issue with the transhumanist label, because I think it's a natural instinct for humanity to improve itself.
Starting point is 01:48:54 So the whole trans thing makes no sense to me. I remember when Singularity University launched, there was a CNET article saying it's being led by, you know, Ray and Peter, and the noted transhumanist Salim Ismail. And I had to look it up because I didn't know what the term meant. And then I researched it and I still don't understand it. I mean, Dave, you're wearing glasses. You're a transhumanist because you've augmented yourself. Yes.
Starting point is 01:49:13 The minute you get a vaccination as a child, you're technically a cyborg. So we're transhumanists by definition, from the beginning of time, as far as I can see. So I don't understand the distinction, why now versus why later, etc. I'm all for this. Obviously, the safety has to be done. And there's so much blood doping in sports that you might as well just rip the bandaid off and say, let's just do it. It's like the amateurs competing in the,
Starting point is 01:49:36 in the Olympics. At some point, you just go, just let everybody compete. And I think that's the way to go. And hopefully that's where it gets. Yeah. Dave, want to weigh in? Well, I'm with Dean Kamen on this. I think FIRST Robotics is a brilliant, brilliant thing. People should be using their minds. And he's always saying that sports is hugely inspiring for kids, but you need to keep it really clean and healthy. And so I worry a lot about role models, you know, and what the role models do. Remember, Charles Barkley said, you know, I am not a role model. I was like, dude, when you're on TV and you're playing basketball, millions of people want to be just like you. Whether you want to be or not, you're a role model.
Starting point is 01:50:16 So I think it's really, really important that they're positive role models, because kids will walk in their footsteps, you know. Yeah. Well, I think it'll be interesting if MIT, Harvard, Eli Lilly, all the biotech companies basically put teams together and they dope them to the max and see which research organization is going to win the competition. It's like Formula One teams, tattoos all over you. The funny thing is, I would expect, so I haven't read the detailed rules for the Enhanced Games, but I would expect, if we are, as I think, in the middle of the singularity, to start to see scaling-law-type performance in benchmarks,
Starting point is 01:50:52 as it were, in this case, world records for the Enhanced Games, start to take off on a really impressive trajectory from year to year. So don't even apply if you can't run a sub-10-second 100-meter dash. Maybe. And maybe the wait calculation applies. Like, don't compete this year, because next year the technologies will be exponentially better. So this article got me thinking about another topic, which is the speciation of humanity, all right? How humanity is going to fork.
Starting point is 01:51:24 I wrote a Metatrends newsletter. It's coming out on Monday. And I wanted to bring the conversation here to you guys. There are multiple forks in the road that we're going to be able to take. You know, I wrote in one of my early books that we're going from evolution by natural selection, which is Darwinism, to evolution by intelligent direction. And I wanted to talk about this as our final segment here. If you think about humanity's speciation, Homo sapiens and the Neanderthals diverged about 500,000 to 800,000 years ago. And since then, we've had sort of at least
Starting point is 01:52:01 mini forks, right? The printing press, for those who were literate versus illiterate. The Industrial Revolution, you know, was a fork between those who owned machines versus those who worked the machines. The internet split us between those networked into information
Starting point is 01:52:17 and those who weren't networked. And what I'm seeing here are these. We've talked about the creator versus consumer: you know, are you going to be a couch potato, or are you going to use this AI to go and create an extraordinary business? Longevity escape velocity: are you going to go on that journey? Do everything you can
Starting point is 01:52:37 to go to 120, 150, indefinite. You know, I don't want to talk about immortality, but... And then, are you going to put a chip in your brain? Are you going to, you know, connect your neocortex to the cloud? One of the ones that's my favorite, from the nine-year-old inside me: Earth versus the stars. You can stay on the planet, or are you going to go explore the cosmos? And finally, will you become a digital upload? Are you going to follow in the footsteps of the company that Alex has been supporting and funding, to digitize your 100 trillion synaptic connections and become an upload? So I'd love to ask you guys where you fall on this and have a conversation. Let's take them one at a time, on longevity escape velocity.
Starting point is 01:53:25 Let's push it to the extreme for this conversation. Salim, there is a treatment that comes out that will keep you locked in at 30 years of age forever. It's an immortality treatment. Do you take it? Like the movie In Time? I would say no. Really? Because, yeah, I would say no, because all the evidence that I've seen points to reincarnation as a real possibility for the future.
Starting point is 01:53:57 What kind of a transhumanist are you, Salim? I'm really surprised. You're not representing. I don't have a religious view on this. I'm just seeing where the data is, and that seems to be where it is. You know, there's definitely not a Western-style heaven-hell type of thing waiting. So let's kind of wave that out of the equation.
Starting point is 01:54:17 But if that's the case, I think of life as a cyclical learning pattern. And if you're an actor, you don't want to be playing the same movie all the time. You want to take on different roles. So I would say no, because that's part of the experience of the soul, to have different experiences, however that takes place. And if I'm stuck as one 30-year-old, I would find that boring after a time, and not a rich enough, varied experience.
Starting point is 01:54:45 Dave, you're given a therapy option: 30 years old indefinitely, immortality. Do you take it? Wait, Dave wanted to respond to what I said. I'm just surprised. I mean, I'm going to say yes, are you kidding me? Of course I'm going to do that. I think you can change over time tremendously while still being in a 30-year-old body.
Starting point is 01:55:03 If I'm 90 and in pain, I may go, damn it, give me the damn 30-year-old juice, right? Okay, so Dave is a yes. But if I'm reincarnated and I come back as, like, a spider? I don't want to take that chance. I'm sticking with what I got. Alex, can I assume you're all in until you upload yourself? Yes, obviously. Next question. Okay, thank you.
Starting point is 01:55:24 Next question, BCI. There is an advanced version of Neuralink or Merge Labs or Paradromics or Openwater, and it's able to provide you a high-bandwidth brain-computer interface to the cloud. You've got high connectivity, infinite memory and context, the ability to recall, understand. It's an extra corpus callosum, if you would. And the question is, it's been done safely in 100 people. Would you be number 101 for this? BCI implant after a hundred consecutive safe implants. Dave. Yeah, I'm probably the only no on this
Starting point is 01:56:04 podcast. You know, my AI agents are... well, it looks like the AI agents are coming back to me with information at an incredible rate, and I can barely keep up with their thoughts. And so then the idea that somehow it's going to bypass that and get right into my head somehow, I just don't see how that works. What I see is, like, the BCI becoming kind of like a drug, you know, you're enjoying it. You feel like you know everything going on, kind of like you're on mushrooms or whatever. Like, suddenly, oh, it makes sense to me.
Starting point is 01:56:33 Then you're like, wait, no, it didn't make sense. You know, but I can't assimilate information any faster than the AI is coming back with it already. And I don't see how bypassing my eyeballs is going to help that problem. I'm so blown away. I totally go the opposite way than you do on this. I would totally be for it. Salim, would you be 101?
Starting point is 01:56:53 Yeah, I'd be totally into this. Wait, so, Salim? Salim, when you're reincarnated, what happens to your exocortex via your BCI? I have no idea, but it'll be fun to see what happens. And does it diverge? Speaking of speciation and forks, does it diverge from your trajectory after you're reincarnated? Maybe, and that would be okay too. I mean, you know, let a thousand flowers bloom.
Starting point is 01:57:17 Dr. Wissner-Gross, are you number 101 on this experiment? I don't like the question. So the question I really... Really? The question I really... No, that was the question. You can answer it and then diverge. Okay, fine. So I'll answer it with a conditional no-but.
Starting point is 01:57:35 What? Hold on. The question that you really, in my mind, should have asked me is, would I be user number approximately a million if it is upgradable? Yes, probably. Okay, if it's upgradable, would you be 101? I am trying to get your risk profile, and how interested you are in this?
Starting point is 01:57:54 Very interested, but as with any new invasive drug, generally speaking, unless you're forced to, I would say, and this is not medical advice, you don't want to be user number 100. So after 100,000 or a million, if it's upgradable, if I can walk through metal detectors, if it has sort of all of these nice affordances... You have a lot of conditions, Alex. You have a lot of conditions for a god-like transhumanist, Peter. At least I'm not demanding to be reincarnated alongside my BCI, so I'm not that fussy. Well, you know, one thing I really love is, with BCI originally, people were like, look, an enhanced human being is going to be so hyper-competitive.
Starting point is 01:58:40 You can't keep up, so everyone's going to need to get this just to be competitive in the world. It turns out that's not going to happen. The AI is improving so quickly that the enhanced human being is completely irrelevant compared to the superhuman AI two years from today anyway. The only way is coupling with AI. I would take a completely different position, Peter. If anything, while we're busy pointing fingers at each other saying, no, you're a bad transhumanist, no, you're a bad transhumanist,
Starting point is 01:59:05 I'll say that wanting to be user number 100 of a BCI is actually being a bad transhumanist. Why? Because it's intrinsically betting that progress is going to be so slow that you need to be user 100, versus waiting a year or two for technology to advance exponentially before you get to inject yourself. I said it's upgradable. Yeah, you said it right in the question,
Starting point is 01:59:27 it's upgradable. You framed it very well, Peter. If it's safely upgradable, I'll probably do it. Okay. Peter, where do you fall? You guys didn't ask. I would jump on the longevity escape velocity bandwagon, of course. And yes, I would be 101 on the BCI.
Starting point is 01:59:44 I've got like the highest risk tolerance. You'd be yes on all of these, Peter. Of course I would be. I'm going to discuss number five in a moment, but Earth versus the stars, and we're going to be forking there. I remember back when I was in graduate school, I wrote a paper on speciation.
Starting point is 02:00:01 What is speciation? Speciation occurs when there's a small population size in a geographically isolated area, right? This is basically the finches on the Galapagos Islands, with a high environmental pressure. And we're going to see that in space, right? If you go to the moon and you're born on the moon, and you don't develop the cardiac system and the musculature and bone,
Starting point is 02:00:25 you're stuck on the moon. And there's going to be a species of humans that are lunites or whatever you want to call that future version. Lunatics. Yeah, lunatics. They're lunatics. Anyway, so there will be speciation in space.
Starting point is 02:00:41 But here is the question. If you have a one-way ticket to go and explore an earth-like planet that is beautiful and exciting. Would you go? Do you have that exploration gene, the desire to go and see the cosmos? We might vary this a little bit and say, would you go to settle on Mars,
Starting point is 02:01:08 would you go to settle on the moon, versus staying on the Earth? Where do you come out on this? Alex, let's start with you. Okay, in the immortal words of the Star Trek Borg Queen, you imply a disparity where none exists, Peter. You're posing Sophie's choice-type questions about one- Of course, I'm trying to make this fun.
Starting point is 02:01:29 No, but you're implying it. Stop dodging the question. In all seriousness, you're posing, on the one hand, one-way trips; on the other hand, you get to go to the stars. This is a false choice. This is like a Sophie's choice that you're posing to transhumanists. Yes, I'm running this. I'd love to go to the stars, but I don't buy the premise
Starting point is 02:01:52 I literally wrote the paper on why intelligence manifests as optionality maximization. What is my purpose here, Alex? It's to discover your level of risk aversion, your level of desire for extremes. That may be your purpose. But the preferences that you're actually revealing are more how bad at optionality maximization are in the face of taking true. transhumanous technologies. Do you know Peter's laws?
Starting point is 02:02:23 Peter's Law number one: if anything can go wrong, fix it, to hell with Murphy. Number two was, when given a choice, take both. I love optionality, but guess why? I've suffered badly from law number two, Peter. Like, let's do both. And I'm like, we can't. Let's do both, we can't. So, all right. I have a couple of quick comments to make here. Please.
Starting point is 02:02:37 So, all right. I have a couple of quick comments to make here. Please. You know, speciation, the, it turns out, so the last Neanderthals died out about 40, out about 40,000 years ago. And this time right now is the only time we only have one species of humanoid. So there's a cool case for saying we'll have a bunch more coming at the point in
Starting point is 02:03:02 the future, according to one or more of these splits. The other thing I really would urge people to do, if you've not done it: Bryan Johnson, the longevity tester fellow, is publishing everything about what he's doing. He recently did a 5-MeO-DMT psychedelic trip and streamed it live. And you really want to go check out what his response was after doing it. He's like, I've been so focused on this longevity stuff, but it's so incredible what I experienced that nothing matters anymore. So I'd still go do this, but it was great. I had that exact experience when I did that journey, and I came out of it and I said, oh my
Starting point is 02:03:38 God, my whole longevity quest. I think this is... You know what? I still want longevity. Yeah, it's fine to have. I'm not decrying it at all. I'm all for it. I mean, just for all of the good reasons around
Starting point is 02:03:49 progress, etc. Something I just want to point out is, as we look at this overall push towards AI, which is fantastic, it allows human beings to do less of the doing and much more of the being. And I think that's the profound opportunity we have, as we named ourselves correctly: human being. We did. But Salim, answer the question: Moon, Mars, a distant star. What's your interest? The question is... Would you move irreversibly to the Moon, to Mars, or to an Earth-like
Starting point is 02:04:13 The question is... Would you move irreversibly to the Moon, to Mars, or to an Earth-like? planet. No. No. I really like my... I do it as an avatar or I do it as a, I go kind of to Alex's question. This is a flawed question. I really like sitting on the beach or playing kind of. I'm trying to figure out human speciation. If we don't have people permanently moving in a direction, you're not going to get speciation. There's lots of people who want to do that. They're welcome to go. I really like sitting on a beach with a glass of wine. We're also ignoring them implicitly ignoring the possibility of mergers. Like why, why is speciation necessarily a one-way door?
Starting point is 02:04:49 at this point. If we have the ability, yeah, we'll have the ability to merge cyborgs into organisms into uplifted animals and all sorts of other crazy combinations.
Starting point is 02:04:58 Yes, we will. Dave Blondon, your answer, my friend. I'm a huge believer in terraforming and I think the Ian Banks culture series view of the world or the future. So Alex is right.
Starting point is 02:05:08 We're going to discover new physics and God knows what's going to be possible. But I think I would not move to Mars or to the moon. The gravity is off. There's a lot of reasons it won't be nice. But I would absolutely,
Starting point is 02:05:19 an instant go to another star that has a, you know, a terraformed world that, you know, we've got the mass right, we've got the orbit right. Yeah, terraforming, I think, is a massive part of humanity's future. I mean, there's a, there's a beautiful element. When I think about what moment in history I would love to go and explore, it is the period of the great explorations, right? It's the 14, 1500s. Of course, without the scurvy and the death and the disease and all of that stuff, but just the idea of going and exploring uncharted lands, right? The whole thesis of Star Trek, it just, you know, excites the nine-year-old to me. By the way, Salim, going back to sort of the brainwashing of religion, I got brainwashed by Star Trek as a
Starting point is 02:05:59 religion early on. I think within Star Trek, the Genesis Project is the most important concepts, you know, like that, that I think is very real. It's huge. Can I make a point here, Peter? Yeah, of course. You weren't brainwashed in a sense that you were giving an absolute truth and told to believe in that, uh, assumptive truth, right? Well, you came across. is a paradigm of imagination and what is they call it in the Roddenberry, infinite diversity, infinite creativity? No, it's infinite diversity and infinite combination, the Vulcan IDIC. Idaq, yes.
Starting point is 02:06:32 Okay, so you got grabbed by that, and that's not ideology, I would suggest. That's just absolute imagination run free in a wonderfully beautiful way. Our final fork, the AWG Digital Consciousness Fork, the technology to completely digitize your 100 trillion synaptic connections and upload you to the cloud. It is destructive in process. You and your brain will not exist at the end of that, but you are guaranteed to be uploaded. Do you do it? Alex, let's kick it off with you. Well, I guess the elephant in the room is that I helped form a company called Eon Systems. Encourage you to check out Eon Systems if they're very interested in this. I think first generation uploads will be destructive. I think second, third,
Starting point is 02:07:15 fourth generation uploads won't be destructive. If I had a choice, if it were a life and death situation and my alternative is death, I would choose a destructive upload. If I have choices, again, going back to my earlier comment, if I have a choice and it's sort of an elective uploading, no, I wouldn't choose a one-directional destructive uploading. I'd wait for third or fourth generation uploads that can be done non-destructively or incrementally. All right, Dave, how about yourself? Yeah, no way.
Starting point is 02:07:43 I think you would not upload yourself. Not even close. I love the idea of having agents out there doing huge amounts of work and bringing them back to me, but the idea that I would ever destroy my meat body and think that that's still me, even though if it's an exact synaptic clone, it's still not me. Love it. Love it. Salim. A hard no, because I think consciousness goes through the body, and so therefore, if you could replace the synaptics, you'd have to replace lots of other stuff. But I would go with Alex's thing. If I know if it's not a destructive process, I'd be good with it. Yeah. I'm a no on the discerptych. process as well, which was my question. So with that, I'm going to go to our outro music here,
Starting point is 02:08:20 which is a celebration, Alex, of Solve Everything, a beautiful piece. I enjoyed this gentleman. Such a pleasure as always. This was a fun conversation. I really, really loved it. We need more of these. Yeah, for sure. All right, onwards to solve everything. This is brought to us by James Petz. Thank you, James. If you've got outro or intro music, send to us at Media at Deamandis. And if you've got an AI-driven company that you want to present in a 60-second video, that's all AI top to bottom, send us that video. And if it's super cool, we'll share it. All right, let's run this.
Starting point is 02:09:33 Gentlemen, a pleasure as always. And we'll see you soon. What an exciting day it was today. If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation.
Starting point is 02:10:37 And I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to Diamandis.com slash Metatrends. That's Diamandis.com slash Metatrends. Thank you again for joining us today. It's a blast for us to put this together every week. Okay. When I sell my business, I want the best tax and investment advice. I want to help my kids, and I want to give back to the community.
Starting point is 02:11:15 Ooh. Then it's the vacation of a lifetime. I wonder if my out-of-office has a forever setting. An IG Private Wealth Advisor creates the clarity you need with plans that harmonize your business, your family, and your dreams. Get financial advice that puts you at the center. Find your advisor at IGPrivatewealth.com.
