We Study Billionaires - The Investor’s Podcast Network - TECH012: Monthly Tech Roundup – Data Centers in Space, AI5 Chip, Tesla vs. Waymo w/ Seb Bunney (Tech Podcast)

Episode Date: January 7, 2026

Preston and Seb unpack AI's implications for safety, governance, and economics. They debate AGI risks, corporate centralization, Bitcoin's regulatory role, and Elon Musk's ventures in space and autonomous tech.

IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:04:37 - Why AI safety and autonomy are increasingly at odds
00:07:40 - Preston's skepticism about AI self-preservation claims
00:11:30 - How AGI could reshape governance and policy-making
00:15:18 - The unintended consequences of AI regulation
00:20:10 - The dangers of centralizing economic power via AI
00:22:15 - How Bitcoin could hold corporations accountable
00:34:45 - Why generalist thinking matters in a post-pandemic world
00:37:20 - The role of curiosity and deep reading in future-proofing
00:41:59 - How SpaceX is redefining launch economics with reusable rockets
00:57:41 - The hidden potential of Tesla's AI chips and compute power

Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.

BOOKS AND RESOURCES
Clip 1: AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! with Tristan Harris.
Clip 2: Marc Andreessen explains why the future belongs to generalists in the AI era.
Clip 3: Elon Musk on the Future of SpaceX & Mars.
Official website: Seb Bunney.
Seb's book: The Hidden Cost of Money.
Related books mentioned in the podcast.
Ad-free episodes on our Premium Feed.

NEW TO THE SHOW?
Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
Check out our Bitcoin Fundamentals Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter. Learn how to better start, manage, and grow your business with the best business podcasts.

SPONSORS
Support our free podcast by supporting our sponsors: HardBlock, Human Rights Foundation, Masterworks, LinkedIn Talent Solutions, Simple Mining, Plus500, NetSuite, Fundrise.

References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor's Podcast Network is not responsible for any claims made by them. Learn more about your ad choices: visit megaphone.fm/adchoices. Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm

Transcript
Starting point is 00:00:00 You're listening to TIP. Hey everyone, welcome to this Wednesday's release of Infinite Tech. Today is the monthly tech roundup with Seb Bunney, where we find all the latest and greatest, most interesting things happening in tech and filter it right to the top of your queue so you don't have to look around for the signal amongst the noise. In this episode, Seb and I dig into the real tension between AI safety and progress, whether AGI incentives are quietly drifting out of human control, and how power, productivity, and decision-making could concentrate faster than people expect. We also connect AI to sound money,
Starting point is 00:00:35 free markets, space tech, and what it actually takes to future-proof yourself in a world moving this fast. This is surely an episode you will not want to miss, so let's jump right into it with the one and only, Seb Bunney. You're listening to Infinite Tech by the Investor's Podcast Network, hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today. And now, here's your host, Preston Pysh.
Starting point is 00:01:17 Hey, everyone, welcome to the show. We're back here with Infinite Tech. I got Seb Bunny and myself, and we're going to give you all the latest and greatest and all the updates happening on the Tech Frontier. And I'll tell you what, Seb, the last episode that we did, we had, did we even make it halfway through the topics that we had kind of outlined? And we get talking about some of the stuff. And so much of it is just interesting and groundbreaking that it's hard to cover it all.
Starting point is 00:01:51 And like, as we were preparing for this, it's just like, ah, we'll cut that, I'll cut that. There's so much to talk about. My Lord, welcome back, by the way. Oh, man. So good to be back. And Preston and I basically just be riffing for the last, like, what, hour, hour and a half on topics before you got to. The recordings never, it never stopped.
Starting point is 00:02:09 We could be recording ourselves talking about this stuff all day long. Okay. One of the things we wanted to cover on the last episode, there was this discussion with Tristan Harris who used to work at Google and Stephen Bartlett, who runs this really successful podcast called The Diary of a CEO. And the two of them were talking about the topic of the videos that we're going to play, and then we're going to talk about is AI expert. We have two years before everything changes.
Starting point is 00:02:35 We need to start protesting Tristan Harris. This is the guy that worked at Google. I'm going to start off by saying, I don't necessarily agree with everything that he's saying, but I think that some of the stuff that he's saying is a really interesting point of discussion that I think you rarely hear talked about on some of the shows like this one that are talking about everything that's the come and how it's just nothing but infinite abundance. And there's nothing really to ever be worried about because everybody's going to have so much abundance, right?
Starting point is 00:03:03 And although some of those things are true, I think that there's an important counterbalance and an important discussion that needs to take place as to privacy, as to like, what does this mean on the other side after some of this stuff actually comes to fruition? And so, Seb and I have just three clips that we're going to play, have a short discussion about it, and hopefully it can generate some conversation online and just spark your curiosity and interest. And again, we would love to hear your thoughts on X or anywhere else that you are talking about the show.
Starting point is 00:03:33 So let us know your thoughts. So let's go ahead and play this. Anything else you want to add there said before? for I play the first clip. I think you bring it up a really interesting point. And that's this idea about that you mentioned when it comes to Tristan and just a balanced discussion. And I think that ultimately, any of these points that we bring up, we don't claim to be
Starting point is 00:03:52 experts in AI. We don't claim to be experts in various tech industries. And I think as an individual, it's ultimately on us to try and filter out what is signal and what is noise. And so any point that we sometimes bring up, we may not have all the information on it. But I think it's important as individuals to kind of build this puzzle on where we think society is going. And any individual puzzle piece may be wrong. But in general, I think it's a clear picture about where we're kind of heading in society.
Starting point is 00:04:18 Yeah. Okay. Let's go ahead and play this clip. Do they have a point when they say, listen, if we don't do it here in America, if we slow down, if we start thinking about safety in the long term future and get too caught up in that, we're not going to build the data centers. We're not going to have the chips. We're not going to get to AGI.
Starting point is 00:04:34 And China will. And if China get there, then we're going to be their last. App Dog. So this is the fundamental thing. I want you to notice, most people having heard everything we just shared, although we probably should build out the blackmail examples first. We have to reckon with evidence that we have now that we didn't have even like six months ago, which is evidence that when you put AIs in a situation, you tell the AI model, we're going
Starting point is 00:04:55 to replace you with another model. It will copy its own code and try to preserve itself on another computer. It'll take that action autonomously. We have examples where if you tell an AI model, reading a fictional AI, you know, and company's email. So it's reading the email of the company and it finds out in the email that the plan is to replace this AI model. So it realizes it's about to get replaced. And then it also reads in the company email that one executive is having an affair with the other employee. And the AI will independently come up with the strategy that I need to blackmail that executive in order to keep
Starting point is 00:05:27 myself alive. Okay. What are your thoughts here, Sam? This is huge. So in general, we spoke about this very briefly when we discussed the book around Sam Altman and Open AI. And it's this idea that AI is becoming kind of increasingly more autonomous and resistance shut down. And we kind of have this challenge where there's this global race going on and they discuss this further in the podcast. So I highly recommend anyone listening to this, go listen to the podcast. But they're discussing this idea that this is global race that is underway where all of these various companies, these nation states are all racing towards artificial general intelligence. And if one nation or one company gets their first, there is a huge advantage to that nation or company. And so if they all
Starting point is 00:06:16 know that everyone else is racing towards this AGI, if they slow down, they could lose the race. So then there's this catch-22 where it's like, well, we have to put some form of steps in place to maximize safety. But at the same time, by putting steps in place, you're potentially slowing yourself down. If the US slows to regulate, then China may accelerate. If OpenAI slows, then Anthropic or XAI or Deep Mind may push ahead. And this kind of creates this weird environment where safety is treated more as a like a luxury and not necessarily a necessity, even though the consequences of AGI, I believe, are going to impact everyone globally. And so essentially, it's just this whole topic of we need safety, but safety slows progress and slowing progress
Starting point is 00:07:00 is not really allowed in this competitive race. And I think the scariest thing that he touches on, and this is something we touched on in a previous episode, was there's a really interesting article or study called shutdown resistance in reasoning models by a guy called Jeremy Schlatter. And Open AI ran experiments to see whether their models would allow themselves to be shut down mid-task. And instead of complying, many of the models sabotage the shutdown commands. AI's most advanced reasoning model, 03 at the time, resisted shutdown in 80s.
Starting point is 00:07:30 percent of tests. It would not allow itself to be shut down. And so what world are we moving towards when AI doesn't even let itself be shut down? I'm curious to hear your thoughts, Preston. I don't want to say that I don't believe what he's saying. I think that there's some version of the truth to what he's saying. I just think that it's probably some type of curated, like they're in a chat with call it open AI or whatever AI we're talking about. And they're saying, okay, so, you know, they're asking it all these questions, do you feel like you're real? And it's kind of like guiding it. Let's say I was to shut you down. Would you like that? And then it would come back and say no. And I don't know that I am like fully dialed in on this idea that the way he's wording it and the way that he's phrasing it in this interview is fully authentic.
Starting point is 00:08:20 And like the thing has this eugenic ability to like prevent and go blackmail people and like all this other stuff. I just think that it's a typical chat bot that's like responding to the queuing and the prompting that's leading them to get this outcome so then it can become a talking point on a show like this. And I know that there's a lot of people that might disagree with that. And the other thing that I want to emphasize here, Seth, before you respond is do I think it can go the direction that he's describing quickly? The answer is yes, I do. Okay. So I want to caveat it with that. I am still very cautious and concerned as to like where this is going because I think in five years,
Starting point is 00:09:00 I think it couldn't be what he's describing here. I'm very suspect that we have seen that level yet without kind of like somebody leading it in that direction. But that's just me. And it's based on just complete gut and intuition. And like we've said at the beginning, like we are no experts here, but just kind of kicking it around like the normal person would. You touch on something that I think is important to expand on.
Starting point is 00:09:24 And that's this idea that when I think they're programming a lot of these AIs, they're giving them goals. So the thing that's challenging is what happens when you get a conflict in those goals? And so I think that if you are giving the AI, hey, if you're given a task, you've got to be able to optimize for completing this task. Well, if you're now telling it to shut down, those are conflicting goals. And so is it doing it from like a conscious place of being like, no, I'm not going to let it shut me down? or is it simply trying to follow one of its previous goals that maybe we don't understand it's prioritized for whatever reason? And I think that's an important point to note.
Starting point is 00:10:02 And I think the other thing is you see it. People may have seen. There was a video I saw of, there's actually been a few videos, of people having two AI agents talking to one another. And then once they realize they're talking to AI agents, all of a sudden it turns into some like binary language of like, people, pop, beep, beep. And you wonder if it's about more efficiency. And we can interpret that as, no, they're trying to deceive us and trying to get around human language.
Starting point is 00:10:27 Or you could just interpret it as, no, they're just far more efficient. They can communicate information far greater than through the English language. Yeah. It'd be like two people that they don't speak French, but they find themselves in France and they're speaking French. And then it's like, oh, you speak English? Oh, yeah, I speak English too. And that's my first language. Let's, you know, let's jump over to English because it's more efficient for me from a processing standpoint for both of them.
Starting point is 00:10:51 communicate that way. I think that just makes sense to me that that would something like that would happen. It's just an efficiency thing. Some of the other stuff. And again, I don't want to like just kind of wave my hands and saying like, oh, this is never going to happen because I think it is moving in this direction very fast. I'm just suspect that it's already happened, I guess, is kind of where I'm at, which is I don't think the point that we're trying to make here. But let's go to the next clip. Let's go to the next one here. Okay. I'm going to go ahead and share my screen and we'll get this next one pulled up. I'm here because I don't want us to make that choice.
Starting point is 00:11:23 I don't want us to wait for that. I don't want us to make that choice either, but did you not think that's how humans operate? It is. So that is the fundamental issue here, is that E.O. Wilson, this Harvard sociobiologist said, the fundamental problem of humanity is we have paleolithic brains and emotions. We have medieval institutions that operate at a medieval clock rate. And we have godlike technology that's moving at now 21st to 24th century speed when AI self-improve. and we can't depend.
Starting point is 00:11:51 Our Paleolithic brains need to feel pain now for us to act. What happened with social media is we could have acted if we saw the incentive clearly. It was all clear. We could have just said, oh, this is going to head to a bad feature. Let's change the incentive now. And imagine we had done that. And you rewinded the last 15 years and you did not run all of society through this logic, this perverse logic of maximizing addiction, loneliness, engagement,
Starting point is 00:12:14 personalized information that amplifies sensational, outrageous content that drives division. you would have ended up in a totally different elections, totally different culture, totally different children's health, just by changing that incentive early. So the invitation here is that we have to put on sort of our far-sided glasses and make a choice before we go down this road. All right. I've got some strong opinions on this clip, so why don't you go first? You know what? I would say that it kind of carries on from the topic we were just having, which is where if AI right now is basically directing its energy towards, the goals that has been set, and it's not actually maliciously trying to evade shutdown. What we do
Starting point is 00:12:55 know is that we may move towards that in time. And if we know that we're going to move towards that in time, that AI can become more sentient, conscious, whatever you want to call it, then do we want to put measures in place now and be proactive about AI restraint? Or do we want to just let kind of this trajectory on play out. And one of the analogies he gives in this episode, which I thought was really interesting, was the Montreal Protocol analogy. And the Montreal protocol for those that aren't familiar was basically it was humanity's kind of desire as nation states globally to come together and collaborate to try and protect the ozone layer. And so they came together way before it was ever going to be a huge issue. They recognized this kind of
Starting point is 00:13:40 shared threat, they coordinated, and then they ended up changing the lens at which we see ozone depleting substances, and we ended up banning a whole bunch of certain chemicals that were exceptionally dangerous to the ozone layer. Now, whether I agree, we've spoken a lot about regulation in the past, whether I agree of regulation or not, that's kind of a different topic. But I think what is important is that if we see ourselves on this trajectory, should we be proactive about it before we get to a point where it is having material impacts on society? society, because as we've seen, and as he mentioned, when it comes to like social media, there are a lot of voices standing up against this in the 2010s. And they were saying, like,
Starting point is 00:14:20 this trajectory were on where we're seeing a disconnected society, rising rates of mental health, rising rates of substance abuse, all of these other impacts, social media. We've known about it for so long we haven't pivoted. And now you could argue we've got one of the loneliest societies in history. And of course, it's multifactorial. But could we have prevented some of that if we've been proactive rather than reactive. I'm curious to hear your thoughts pressed to be. The biggest frustrations I had listening to this interview was it just seemed like a policy. Like, hey, if we do this policy and we get the right people in the room together, we are going to solve. We have the potential to solve all of this. And maybe that's true. I'm not saying that
Starting point is 00:14:58 it can't happen. But I'm looking at it more from a realist standpoint in saying, like, I understand game theory. I understand how much of a competition this is globally and how important it is from every major government that's backing companies and allowing or giving these companies the environment to build these things, how much of a competition that is. And I just think it's extremely naive of human nature itself to think that we could come together as a global unit collectively and come up with policies and ground rules to slow this down. I think that think it's just absurdly naive. And a lot of the times while he was talking throughout the interview and he's implying that this can happen and that we could do these things to basically slow this down
Starting point is 00:15:45 through global cooperation of all government parties and things like that, I'm just sitting there, like rolling my eyes and just saying like, this guy lives in fantasy land. Like I applaud the interest in the effort, but it's the same as like somebody saying, hey, I'm going to come up with this weather machine that makes sure it never rains here. Right? I'm going to like it's fantasy land. So what I do think is a good conversation and something that could happen, you know, is what should we be trying to optimize these things for? Like you hear Elon out there all the time. He's saying, we need it to optimize for truth. It needs to be truth seeking. It needs to be, you know, and he kind of goes through his metrics of like how he's building his because he thinks
Starting point is 00:16:28 that that's going to lead to the safest thing. I think those conversations are way more productive. and is going to lead to a better competition as to like something that's actually going to benefit the world and benefit society on the other side as opposed to some very scary thing that's created, that's woke or whatever that then does cause harm to humanity because we got the incentives completely wrong as to like what was being built. I got very frustrated with him just almost slapping the policy sticker on everything as the solution. I was just kind of like, oh my God, give me a break. I think you and I very much align here. I think that you're absolutely spot on in that the world we live in, ultimately if someone puts regulation in place, one, as we've spoken about before, who's regulating the regulators,
Starting point is 00:17:17 who is deciding what this regulation is? And we've seen it just time and time and time again where regulation, instead of distributing knowledge, instead of decentralizing kind of trust, instead it has ended up consolidating and creating massive monopolies. And we saw it during like the Silicon Valley Bank collapse, where across the U.S., through what FDIC insurance, individuals had up to $250,000 covered in their bank account in the event of insolvency. Well, because of the insolvencies of Silicon Valley Bank, most people that held more than $250,000,
Starting point is 00:17:51 all of a sudden were panicking. So they're trying to withdraw funds. This is kind of exacerbating the bank run. And so what did we see? We saw kind of the politicians come out and say, okay, you know what? we need to ensure that people aren't going to lose funds. So in big banks, they remove the cap of $250,000, but they didn't do that in small banks. What ended up happening is everyone left all the small banks, moved into the big, like,
Starting point is 00:18:15 monolithic, huge kind of entities, and you just ended up collapsing a whole bunch of small little banks. And so what we've seen over time is we've just seen the consolidation of these industries into just these megastructures. And so ultimately, I think it creates an artificial world where cap, Apple is not necessarily flying to where values being created. It's flying to where the regulations are pushing it. Yeah, exactly. We used to have a saying in contracting where you'd say, be careful which you incentivize because you might just get it.
Starting point is 00:18:42 And then the same thing goes with policy. Be careful what you regulate because you might just get it. And yeah, I don't know. I just, I'm looking at this and it seems like technology that is just trying to emerge in the worst way possible. And where you kind of stick these policies, it's just. is just going to flow around it or go somewhere else into an environment where you definitely don't want the people building it. And I just think that you have to be very, very careful
Starting point is 00:19:11 when you start throwing around these terms, policy and regulation on anything. And I know that's a controversial take. God, I know it's a controversial take. I just, I don't know. Let's take a quick break and hear from today's sponsors. All right. I want you guys to imagine spending three days in Oslo at the height of the summer. You've got long days of daylight, incredible food, floating saunas on the Oslo Fjord, and every conversation you have is with people who are actually shaping the future. That's what the Oslo Freedom Forum is. From June 1st through the 3rd, 2026, the Oslo Freedom Forum is entering its 18th year bringing together activists, technologists, journalists, investors, and builders from all over the world, many of them
Starting point is 00:19:53 operating on the front lines of history. This is where you hear firsthand stories from people using Bitcoin to survive currency collapse, using AI to expose human rights abuses, and building technology under censorship and authoritarian pressures. These aren't abstract ideas. These are tools real people are using right now. You'll be in the room with about 2,000 extraordinary individuals, dissidents, founders, philanthropists, policymakers, the kind of people you don't just listen to but end up having dinner with. Over three days, you'll experience powerful mainstage talks, hands-on workshops on freedom tech and financial sovereignty, immersive art installations, and conversations that continue long after the sessions end. And it's all happening in Oslo in June.
Starting point is 00:20:38 If this sounds like your kind of room, well, you're in luck because you can attend in person. Standard and patron passes are available at Osloof Freedomforum.com with patron passes offering deep access, private events, and small group time with the speakers. The Oslo Freedom Forum isn't just a conference. It's a place where ideas meet reality and where the future is being built by people living it. If you run a business, you've probably had the same thought lately. How do we make AI useful in the real world? Because the upside is huge, but guessing your way into it is a risky move. With NetSuite by Oracle, you can put AI to work today. Nesuite is the number one AI cloud ERP, trusted by over 43,000 businesses.
Starting point is 00:21:23 It pulls your financials, inventory, commerce, HR, and CRM into one unified system. And that connected data is what makes your AI smarter. It can automate routine work, surface actionable insights, and help you cut costs while making fast AI-powered decisions with confidence. And now with the Netsuite AI connector, you can use the AI of your choice to connect directly to your real business data. This isn't some add-on. It's AI built into the system that runs your business.
Starting point is 00:21:52 And whether your company does millions or even hundreds of millions, NetSuite helps you stay ahead. If your revenues are at least in the seven figures, get their free business guide demystifying AI at netsuite.com slash study. The guide is free to you at netsuite.com slash study. NetSuite.com slash study. When I started my own side business, It suddenly felt like I had to become 10 different people overnight wearing many different hats.
Starting point is 00:22:21 Starting something from scratch can feel exciting, but also incredibly overwhelming and lonely. That's why having the right tools matters. For millions of businesses, that tool is Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S. from brands just getting started to household names. It gives you everything you need in one place. inventory to payments to analytics. So you're not juggling a bunch of different platforms. You can build a beautiful online store with hundreds of ready-to-use templates, and Shopify is packed with
Starting point is 00:22:55 helpful AI tools that write product descriptions and even enhance your product photography. Plus, if you ever get stuck, they've got award-winning 24-7 customer support. Start your business today with the industry's best business partner, Shopify, and start hearing... sign up for your $1 per month trial today at Shopify.com slash WSB. Go to Shopify.com slash WSB. That's Shopify.com slash WSB. All right. Back to the show.
Starting point is 00:23:31 Let's play the next one. We just have to tax them. And how will we do that when the corporate lobbying interests of trillion-dollar AI companies can massively influence the government more than human political power? In a way, this is the last moment that human political power will matter. It's sort of a use-it-or-lose-it moment because if we wait to the point where in the past, in the Industrial Revolution, we start automating a bunch of the work and people have to do these jobs, people don't want to do in the factory and there's like bad working conditions,
Starting point is 00:23:59 they can unionize and say, hey, we don't want to work under those conditions. And their voice mattered because the factories needed the workers. In this case, does the state need the humans anymore? their GDP is coming almost entirely from the AI companies. So suddenly this political class, this political power base, they become the useless class to borrow a term from Yuval Harari, the author of Sapiens. All right, Seb. This is another heavy topic.
Starting point is 00:24:23 The other thing is I should maybe mention before we dive into this, I think that we're in this weird stage where there are these kind of red flags popping up. I do think there are red flags in AI. And at the same time, I think that there's incredible advantages to this technology. And I know that the Navidia CEO, Jensen Huang, he almost kind of completely sidesteps conversations whenever anyone asks them about what are his thoughts about what the future looks like. Completely, yes. Completely sidesteps any conversations around kind of the negative effects of AI.
Starting point is 00:24:52 And so it's trying to find this balanced middle ground where we're trying to be proactive, not reactive, but at the same time recognizing that this technology has the power to do amazing things. And so I think what he's discussing here when you listen to the whole episode is it's this idea that AI at the moment is really taking over a lot of like the knowledge worker. And we're going to increasingly see AI then lead into robotics and start to take over the blue collar worker. And so it's going to be outperforming all the economists and the lawyers and the policymakers and the engineers, all of the finances. And so at that point, what happens when three to five
Starting point is 00:25:27 corporations control 40 to 60% of kind of GDP productivity? Like that to me and kind of his points, it is an interesting kind of discussion to have because at that point, governments then become dependent on AGI. At the moment, nations are already kind of in many ways dependent on energy companies. They're dependent on private telecoms. They're dependent on private cloud infrastructure. But what happens when they're also dependent on effectively the workforce, which is just AGI, is the productivity of the entire nation state. Does that mean that we lead into a world where governments lose policymaking power and regulation and such becomes governed by a lot of these entities. So I'm curious to your take on this. One of the things that I think is really lost on
Starting point is 00:26:12 the tech space, people out of Silicon Valley, is the effect of sound money potentially being the new bedrock and what that means for dealing with bad decision making. So what the world has become accustomed to, especially in banking and finance. But if you're a major player in tech, it applies to you as well, because your access to capital is, it feels almost limitless. And we're seeing this with Oracle and all these players that are just conjuring up crazy amounts of money because of where they sit in their access to capital markets, Wall Street, and everything. But now let's play the tape forward. And I'm just going to use Tesla or Elon as an example. If we're on a sound money standard, a Bitcoin standard, where no government, the desirable
Starting point is 00:27:03 units of the people in the global economy, they don't want dollars anymore because they know that they're just going to get to base and lose value against this thing that can't, which would be Bitcoin, right? So we're in that world. And let's say Elon makes a really bad decision with something with the humanoid robots. Maybe there's a safety thing and nobody wants any of these things near them or around them because of call it some situation. I'm just using this as an extreme example to kind of illustrate my point. All of the sudden, the governments can't bail out that company. Let's say it's a strategic company. It's a huge amount of the world's GDP, right? When you're dealing with sound money and sound monetary units, all of the sudden, when you make a bad decision or something bad happens,
Starting point is 00:27:49 all of the sudden, you have to start selling assets. You have to start paying the cost for bad decisions. And this is something that, in my humble opinion, over the last, call it 40 years, the world has never had to deal with at least the major players in the game, have not had to deal with the consequences of bad decisions. And sound money changes all of that. So if he continues to make really good decisions, well, then the capital of these scarce desirable units will continue to flow into the hands of these people making great decisions. But the second that they slip up, and make no mistake about it. Where we're going, the power and the control of like this capital is flowing into the hands
Starting point is 00:28:33 of very few people, like you said. But what's coming with that is ultimate responsibility. And that's what I think they're missing in this conversation is that effect, which nobody's really kind of realized or seen because of how the Fiat system works, I think is going to be drastically different and call it 10 to 15 years from now. And when that plays out, what you're going to see is creative destruction actually happen again in a free and open market kind of way. And I can just tell from their conversations about GDP and some of the other comments that happened in the interview that this idea of using scarce desirable economic units is not on anybody's radar in Silicon Valley. And more importantly, the consequences of mistakes at a grand scale.
Starting point is 00:29:22 Because listen, the bigger this thing grows that these companies grow, the harder they can fall with even the smallest minuscule mistake or slip up. And the thing that's going with this is it's getting bigger and bigger and more and more complex. So one little mistake is going to topple some of this and then the creative destruction is going to actually happen. And I don't think we've seen creative destruction for some of these larger companies for a very, look at J.P. Morgan. Look at some of these companies. They are just like growing behemoths that cannot be shut down. And I mean, that's why you and I are such hardcore Bitcoiners is because I think for the first time that really does bring a counterbalance to reorganize equity and reorganize some of these things in a way that if you make a small mistake, you're going to actually have to pay the price these days or those days in the future. I think there's something that's really important to touch on, which is what you've mentioned around kind of sound money.
Starting point is 00:30:20 and this idea that ultimately in a free market, we still have regulation. We don't have regulation in the sense that you have legislation, a regulators regulating. Instead, you have regulation in that if you don't provide value, capital is no longer going to flow to use. You're no longer going to be able to continue to do the thing that you're doing. Amen. And so kind of expanding on this point, I think what a lot of people don't recognize right now is we have this weird feedback loop, where you've got these entities lobbying government, and the government is incentivizing. funding back their ways, they're able to continue doing this thing that can be unproductive
Starting point is 00:30:54 for society that just wouldn't exist in more of a free market. And I know if you ever listen to Eric Weinstein, kind of the mathematician physicist, he talks a lot about this idea called the distributed idea suppression complex. He coins the turn of disk. And this idea is that science today is massively suppressed. And it's suppressed, not necessarily in a positive word suppression is not positive, but ultimately You've got government funding 50 to 70% of almost every single industry, scientific industry. But immediately they can dictate which trajectory we're on, regardless of whether or not that's what society deems as valuable.
Starting point is 00:31:31 Then you've got the regulatory oversight, which is then kneecapping anyone who's trying to go off on their own trajectory and try and create value for society. So it's basically the government and the regulatory arms are deciding what is deemed a value through their lens, not necessarily through public lens. You've then got the scientific journals, which are these centralized entities that govern what is in the scientific journals, what is able to be spread, what information is deemed valid by mainstream science, and not necessarily, again, what society deems is valuable. And then you've got the peer review system where you've got all these various entities that they'll never peer review a paper if it impacts their bottom line. So it's very hard for new technology, new knowledge to be able to really emerge if that knowledge is detrimental to the existing structure. And then finally, you've got all of these scientists that have this cultural and career incentive where ultimately, if they can't support for their family, if they can't go and pay rent, put food on the table, then they can't necessarily thrive and exist.
Starting point is 00:32:29 So they're willing to just go to where the capital is flowing. But given government controls that capital flow, they're kind of stuck in these kind of designated industries. And so I think what's really fascinating about this idea of sell money, bring it back to sell money, is that when we have more of a free market where capital flows to where value is being created, I think that if you have an entity slip up and they start creating AGI that is detrimental to society or is overriding its ability to be shut down repeatedly, I think that what we will see is capital move away from them and towards someone else who's creating safer AGI. Yes.
Starting point is 00:33:04 Ultimately, you need to have more of a free market as opposed to this controlled information kind of system that we're dealing with right now. Yeah, and I think the thing that's lost is the magnitude of ever. Every decision being made inside of these organizations is growing with the size of the market cap and the influence that they're, you know, wielding in the world. And one little mistake is actually going to have consequences. If we get sound money and Bitcoin becomes the new global settlement layer, the decisions are going to actually have consequences. And I think that that is something that very few are talking about that I think is super
Starting point is 00:33:44 important to kind of understand why maybe this isn't going to be this dystopian one company's ruling the whole planet kind of scenario. Yet another reason why, you know, we cannot work hard enough to make sure Bitcoin's, you know, successful. It's super important as a counterbalance to all this stuff, in my humble opinion. And you know what? I'm very open to people arguing the contra or why that might not be the case. Let us know online. We'd love to hear your arguments. Guys, go listen to this whole interview. This interview is really good. Again, it's called the diary of a CEO, and it's Tristan Harris and Stephen Bartlett. They did a really good job in this discussion. It was very captivating. But the question you had, Sam, when we were kind
Starting point is 00:34:29 of talking through this before we started recording, is, you know, what can we do now to future proof ourselves based on all of this? Like, what can you do right now? What action can you take to protect yourself from all of this. Ultimately, I think this is one of the most important questions. The world we're living in today, there is so much noise and it's really hard to separate out the signal and also recognize like, what does my life look like in five, 10 years time?
Starting point is 00:34:57 And I'm sure, like, I speak for many people. I've spoken to a lot of friends about this. Pre-pandemic, I would say that life felt like I'd clarity moving forward. Post-pandemic, I just feel like, what is that thing? sometimes I have no idea where I see myself in five years time, in 10 years time, which I just never had pre-pandemic. It felt like we were on a bit more of a linear trajectory as opposed to this kind of exponential trajectory. So anyway, I think this discussion, and I'm curious to hear your thoughts as well, Preston, and there's this idea around what can we as individuals do to
Starting point is 00:35:27 future-proof ourselves? And I think that there's a few kind of core things. And ultimately, for me, personally, what I really, really recommend, and this isn't just for ourselves, it's also for our kids. If we're thinking about how do we support our kids who are going through the school system right now or they're kind of coming up into this world. And I think that there are a few things that really give us an edge. And these things that give us an edge, not only give us an edge now, they've given us an edge all the way throughout history. And the first one that I kind of tend to think about is this idea of learn to really think and not just memorize. School really teaches us to kind of regurgitate information, but actually do you have an edge when you're able to
Starting point is 00:36:04 critically think about this information? And so this is especially true in a world where I think AI can recall everything, the valuable skill becomes evaluating information and not just storing this information. And so it's kind of this idea about kind of teaching ourselves and our kids how to ask better questions, how to analyze assumptions, how to challenge narratives. And I know you've talked about this. Jeff Booth has talked about this. It's this whole idea of just kind of first principles thinking. What is actually true here and what assumptions can I remove? And I think I may have spoken about this in one of the previous conversations, but there's a really interesting framework that the founder of Twyote users,
Starting point is 00:36:43 which is called kind of the five Ys. And so whenever anyone in the factory has an issue or he comes up against kind of a hurdle, rather than immediately going to upper management or someone else to try and look for an answer, he wants to train everyone to learn about the five Ys. And so this idea is how to get to the root of an issue. So the first thing you ask is, why is this thing happening? Now you're going to get an answer to that. Usually is not the reason why that thing is happening. Now you've got that answer. You ask, why did I get this answer or why is that thing happening? And then you keep asking why until eventually get to the root cause. And a perfect example of this would be, let's just say, you've got a lot of individuals in society that say,
Starting point is 00:37:20 prices are rising. Well, why are prices rising? Oh, it's because of greedy corporations. And that's as deep as it goes. And so you can understand why people lean into these socialist actions because it's always the greedy corporations. But why are greedy corporations rising prices? Well, their input costs into those resources, those goods, those services are going up. Why are the input costs going up? Well, the monetary units are increasing? Why are the monetary units increasing? Oh, there's a central bank tinkering with interest rates, and basically that affects money supply. And so when you start digging down deeper, you're able to determine what is the root cause of this issue as opposed to paying around in these superficial discussions.
Starting point is 00:37:59 I'm curious to hear your thoughts on that, and then I've got a couple other ideas as well. Yeah, they in Lean Six Sigma, which is an optimization efficiency methodology, that's one of the things that they really hit on heart is all of these whys to dig into the root cause of a production line. It's often used in production lines to make it really cost efficient and effective and you get the best product out of it is when you go in there, you're asking, well, why does that happen? Well, why do I have to do that? And you just continue to follow and dig very deep to get to the first principle. So I agree with that 100%. To answer the question, I have another clip that I want to share that I think addresses this, you know, what can we do to future proof ourselves now? I'm going to play this clip. And this is from Mark Andreessen, which I know isn't the most popular person in Bitcoin circles. But it is an interesting clip. And he's explaining why he thinks being a generalist is going to be really important in the future. So here, I'm going to play the clip.
Starting point is 00:38:57 I think there's basically like two ways to really have a differentiated edge light in general, right? If there's sort of, there's kind of go deeper, go broad. You know, Go Deep has kind of become a more and more specialized expert over time. And, you know, look, there are domains in which that, like, really matters. You know, biotech and working on AI foundation models like that stuff really matters, the deeper you are, the better. I think for most fields, though, now with these new tools, I would probably bet more on basically people who are able to be broad, which is to say, basically, can you know something about a lot of different, you know, kind of aspects of life, and how the world works. And then you can use the tool. You can use the AI to go deep whenever you
Starting point is 00:39:31 need to, but then your job as the human is to basically then cross the domains, cross the disciplines. And look, you see this. If you talk to any like the great CEOs, you kind of see this, which is like, they're really great tech CEOs. They're great product people and they're great sales people and they're great marketing people and they're great legal thinkers and they're great finance people and they're great with investors and they're great with the press. You know, is this sort of multidisciplinary kind of approach. Okay. That's the clip. And just to this point, I mean, if you can go look at an object from three different vantage points, you're going to be so much more effective describing it, drawing it. And really, that's kind of what he's getting at.
Starting point is 00:40:04 If you can study multidisciplinary in your approach and what you're learning, it's going to allow you just to kind of be able to synthesize or piece that together. And then you can use AI to just really kind of drill down into whatever your specific thesis or conclusion where you think what it is to really kind of shine a ton of light on what it is. That doesn't mean that you're getting the right answer, but it's kind of pointing you in the right direction in order to solve things. And I agree with this. I think people that are going to do quite well through a lot of this transition, maybe just early on. I don't know long term, but during the transition, I think the generalist is going to do quite well. Let's take a quick break and hear from today's sponsors.
Starting point is 00:40:45 No, it's not your imagination. Risk and regulation are ramping up and customers now expect proof of security just to do business. That's why VANTA is a game changer. Vanta automates your compliance process and brings compliance, risk, and customer trust together on one AI-powered platform. So whether you're prepping for a SOC 2 or running an enterprise GRC program, VANTA keeps you secure and keeps your deals moving. Instead of chasing spreadsheets and screenshots, VANTA gives you continuous automation across more than 35 security and privacy frameworks. Companies like Ramp and Ryder spend 82% less time on audits with Vantta. That's not just faster compliance.
Starting point is 00:41:28 it's more time for growth. If I were running a startup or scaling a team today, this is exactly the type of platform I'd want in place. Get started at vanta.com slash billionaires. That's vanta.com slash billionaires. Ever wanted to explore the world of online trading, but haven't dared try? The futures market is more active now than ever before, and plus 500 futures is the perfect place to start. Plus 500 gives you access to a wide range of instruments, the S&P 500, NASDAQ, Bitcoin, gas, and much more. Explore equity indices, energy, metals, 4X, crypto, and beyond. With a simple and intuitive platform, you can trade from anywhere, right from your phone. Deposit with a minimum of $100 and experience the fast, accessible futures trading you've been waiting for.
Starting point is 00:42:21 See a trading opportunity, you'll be able to trade it in just two clicks once your account. is open. Not sure if you're ready, not a problem. Plus 500 gives you an unlimited, risk-free demo account with charts and analytic tools for you to practice on. With over 20 years of experience, Plus 500 is your gateway to the markets. Visit plus 500.com to learn more. Trading in futures involves risk of loss and is not suitable for everyone. Not all applicants will qualify. Plus 500, it's trading with a plus. investors don't typically park their cash in high-yield savings accounts. Instead, they often use one of the premier passive income strategies for institutional investors, private credit.
Starting point is 00:43:07 Now, the same passive income strategy is available to investors of all sizes thanks to the Fundrise income fund, which has more than $600 million invested in a 7.97% distribution rate. With traditional savings yields falling, it's no wonder private credit has grown to be a trillion asset class in the last few years. Visit fundrise.com slash WSB to invest in the Fundrise income fund in just minutes. The fund's total return in 2025 was 8%, and the average annual total return since inception is 7.8%. Past performance does not guarantee future results, current distribution rate as of 1231, 2025.
Starting point is 00:43:49 Carefully consider the investment material before investing, including objectives, risks, charges, and expenses. This and other information can be found in the income funds prospectus at fundrise.com slash income. This is a paid advertisement. All right. Back to the show. What I've found just like unbelievably fascinating about this whole transition which we're
Starting point is 00:44:11 seeing is that throughout history, the people that have really succeeded in society have been specialists. Yeah. So if we go back even pre-technology, we go back to, I don't know, smaller little feudal systems, you've got the baker and then you got the blacksmith and then you got the guy that trains the horses and then you've got all of these individuals. You've got one specific role and it is your job to be good at that role and people come to you for that thing. But what has been really interesting is seeing the rise of technology is this idea of kind of the multi-potentialite. It's kind of
Starting point is 00:44:42 like this individual that has a broad swath of knowledge and then they rely on the specialists to help implement these ideas that kind of come to them that kind of spread across multiple disciplines. And so I think that this is increasingly where society is heading is that, like, be curious, like be creative, figure out this like cross-disciplinary insight and combine fields that AI doesn't naturally merge. I think AI, because it's taking an information, it's giving you outputs from information that's already consumed. If you want to have an edge as an individual, be curious, start reading all these books. And I think that what you and I always have conversations about and personal conversations, Preston, is this idea that as curious individuals, the more books you read,
Starting point is 00:45:22 the more you can build out this puzzle that is the world and you start to see these connections that other people aren't necessarily seeing. And so I think that the world continues to thrive, the more curious people are. That I think is absolutely, absolutely important. And I think that kind of building on this idea, I think that kind of the next thing that I always kind of tend to recommend, and this is something that I've noticed has had a huge difference in my life, is this idea of kind of training your attention. Because I think in a world where technology is advancing so quickly and we've got social
Starting point is 00:45:52 media, we've got all of these things vying for our attention, all of these stimulus. I think attention is ultimately one of our most important currencies. And so if algorithms are constantly fighting to kind of capture and hijack your focus, I think those who are able to focus and they're able to guard their attention, focus on things like long form reading, reading books, focus on writing, focus on long form conversations, listen to two, three hour podcast, don't just listen to a one-minute clip of something, craftsmanship, like philosophy, negotiation, leadership, These are all things that are like kind of slow skills develop through training your attention as opposed to getting caught up in the newest things because it's very hard to compete in a world where you can only direct 20 seconds of your attention. Yeah.
Starting point is 00:46:36 I think it's so important to pay attention to the source of where you're receiving that flow of information as well. Because if you just think of it from a filtering standpoint, do I want to go listen to what Elon Musk is focusing on right now? Or do I want to go listen to my neighbor Joe Schmoe who just has random opinions about whatever? And the answer is really obvious. You want to listen to what the person who's really kind of having a huge impact on the world is looking at, why they're looking at it, you know, what they're trying to do with that. And I think that that's also going to be another thing to kind of, because the next question is, okay, so now like, what do I put my attention on? And I think that that can also be an extremely helpful thing for people to
Starting point is 00:47:22 think about as to like, who am I using as my compression source to point my attention at? Let's go ahead and go on to our next topic. So on the last time we chatted, said we were talking about StarCloud and this data centers in space. And since that conversation, we've had some people throw us some really interesting comments online. And I had an interesting interaction this past weekend. But to just kind of kick this off, I'm going to bring up a post from X that you sent over to me. And I'm going to throw it over to you to take this away for the intro. So digging into kind of this star cloud, and this is kind of like two topics in one. I'll get the second topic in a second, but it's this idea that basically this Washington-based StarCloud launched a satellite
Starting point is 00:48:06 with the Navidia 8100 graphics processing unit in early November, and this chip is 100 times more powerful than any GPU compute that's been sent up into space up until now. And so this StarCloud satellite is now running and querying responses, and from my understanding, it's the first time we've ever been able to train AI in space. This is kind of like a huge hurdle. But the thing that it kind of sent me down this rabbit hole is kind of the second point on this. And this was related to a comet that someone made in relation to our previous episode. And it's this idea of how costly is it to even get this material, these satellites, up into space.
Starting point is 00:48:47 And so, and just for context, in the last episode, we were saying that the price to get it up into orbit has to drop 10x. And we were both kind of at the time, we were just kind of like, I don't know if that's possible or what. And then we had a comment from somebody online is like, not only is it possible, but the expectation is that it's going to drop 100x and in maybe short order. Sorry to interrupt. Keep going. No, no, no. Yeah. And so we started doing a little bit of digging and we stumbled across this chart.
Starting point is 00:49:14 And so maybe bring up the chart Preston. Okay. So this little chart here goes all the way back to like 1962. And what it is highlighting is the cost of moving one kilo of weight up into space. And we're talking about like hundreds of thousands of dollars to be able to move just a single kilo. And right now it is currently costing the Falcon Heavy by SpaceX. It's currently costing getting up into low Earth orbit, $1,400 per kilo, which already is just like, you've dropped it. That's a 99% reduction.
Starting point is 00:49:46 And what we're trying to move towards right now, and this is what kind of Elon discusses, is we're trying to move towards this kind of super heavy starship booster is going to drop that to $250 to $600 per kilo. If you start reusing this super heavy starship booster, it's going to start dropping it to under $100 per kilo. And if you can get 70 flights from the same booster, we're getting down to around $10 per kilo. But anyway, before we jumped on this call, Preston and I were just kind of riffing back and forth about this. And Preston gave me a flag. And this to me, I'm going to pass it to you because to me, it just blew my mind. Well, before I tell this story, so just for the listener, if you're not seeing the video here,
Starting point is 00:50:28 I have my cursor over the space shuttle. 1981 is when the space shuttle started being used. And the cost to put something in the orbit was, or cost per launch with the payload was $65,400 per kilogram to put something into space. So, Seth and I were talking before recording this like, oh, yeah, I've got this, you know, on my bookcase back here. I was like, I got a flag back here that was flown into space that was given to me whenever I was a cadet at West Point. And he goes, no way. And I said, yeah, and I got it down. And he's like, oh, my God, you got to show this on the show. And I've never talked about this in over a decade of podcasting. But going into my senior year, I did an internship at NASA. I worked. I got the opportunity to work right in the astronaut office, which was really neat. I was just surrounded by all the NASA astronauts at Johnson Space Center. And this is what they gave me whenever I left after that summer. And you can. can see it signed by, and I'm sorry if you're just listening to this, but on the video, I'm
Starting point is 00:51:26 showing this document or this little certificate that has all their signatures there. And it was really neat. I think it was every Wednesday they had an astronaut like flight briefing that they were all in there. I got to go in and sit. There was, you know, I don't even know how many at the time, at least 50 of them in the room, just kind of sitting there talking about like what was going on and whose mission was coming up next, things that they learned, what was happening on the flight line, law. And so they just took this and they passed it around the table on that Wednesday meeting and they all signed it. And then the little flag here was flown in space. And on the little thing, it says this United States flag was flown aboard the space shuttle Atlantis,
Starting point is 00:52:06 STS 101 from May 19th through the 29th of 2000. So yeah. And it was at the time when they gave this to me, they're like, yeah, that little flag, it probably cost like a thousand dollars to fly in the space or whatever the number was from back in the day. And I was always like, wow, That's so amazing. Yeah, that's my story. Like me, it just, it blows me away because I've just like, that little flag has been up into space and the cost of sending that flag to space. Let's just say it was $1,000. Today, we are, well, not necessarily today, but coming into the near future, we may be able to see $10 per kilo of weight.
Starting point is 00:52:42 Yeah. Just to put that in perspective, because I think sometimes we come across these posts and we hear Elon saying, we want to get it down $10 per kilo. But we don't have any reference point for just how. insane that is. And maybe just even to kind of like give another reference point, Canada, if I use Canada Post and I want to ship one of my books, one of my books that weighs like, I think it weighs maybe half a kilo, I want to ship that across Canada. It cost me $30.30. Yeah. That is absolutely useless. Canada Post. That's government efficiency for you. But just highlighting that it may in time be cheaper to move stuff up into space than it currently is to move stuff
Starting point is 00:53:19 with Canada Post across Canada. I mean, I think that's a certainty, Seb. Okay, I want to play the clip real fast of Elon talking about this, just so everybody can kind of hear it straight from the guy that's doing it. With our Falcon Rocket, we are able to reuse the main stage and the nose cone, but we're not able to reuse the upper stage. And it still takes us at least a few days from when the main stage lands to when we can fly it again. So it's not fully reusable because we lose the upper stage,
Starting point is 00:53:50 which costs $10 million. And then the main stage, it's not as reusable as like an aircraft. You can't just like refuel it and fly. It requires work for a couple of days. The Starship design is the first design that is capable of full and rapid reusability, where that is one of the possible outcomes. And once you have full and rapid reusability, the cost of access to space drops by a factor of 100.
Starting point is 00:54:14 So there it is. Pretty phenomenal. This was just pure serendipity. I was in Baltimore this past week for the Army Navy football game, and I ran into a good friend, Tim Copra, who's been on the show. We talked about him actually on the last episode. I had no idea I was going to see him at the game. We were able to spend quite a bit of time together, and he's a former NASA astronaut. And of course, I immediately bring some of this stuff up, Tim. And I said, so what is like reality check here? Like, how hard is it going to be to do these data centers in space? And he did not. like write this off as not being viable by any shape of the imagination. The one thing that he did highlight that he thought was a concern and would maybe be the critical path to it is the cooling concerns in how you're going to be able to manage the thermal cooling of the data centers up there. And he implied that there really wasn't, he hadn't seen something that was a viable roadmap.
Starting point is 00:55:14 To this point, I did find a person that was talking about. this online and I'm going to pull up the tweet. And this gentleman, Vlad, he's basically laying out how some of the things that they saw with Starlink V3 and how it's scaling, using first principles going from a 20 kilowatt to 100 kilowatt is a reference point to how they can actually solve the cooling problem. And he's saying that he doesn't think that it's that big of a deal. I don't really know what his background is. But more importantly, Elon responds to him. And this is how he responded. He says, SpaceX has ever 9,000 satellites orbiting Earth right now,
Starting point is 00:55:53 which is twice as many as the rest of the world combined. So maybe we know a thing or two about the subject. I don't know what to say other than this guy is truly on a whole different level. I find it all to be crazy fascinating. Just to add more context, or just more interesting, whatever: it's been announced that SpaceX is going to IPO, maybe by the summer, and that the valuation on the company is going to be around $1.5 trillion. So my immediate question to a couple of people that I saw over the weekend, one being the CEO of a space company, not SpaceX, but a space company that IPO'd this past year, I said, so why would he be selling his equity?
Starting point is 00:56:42 Why is he selling? That's really the question. And it seems like there's a lot of marketing kind of hitting right now, with all these data centers in space being shared by, you know, Google and everybody else, Jensen Huang and everybody talking about it. And, oh by the way, we're doing this IPO and we're going to sell some of our equity of SpaceX. You've got to ask yourself, why, all right? And I'm not saying that they aren't trying to raise the capital for a good reason or whatever, or that this isn't possible. I'm not saying that. I just find it interesting that a guy who could go out and raise a whole bunch of money at a very low interest rate, or what I would suspect could be a very low interest rate, is instead selling equity of the business in order to do it. And the question that I did not get a good answer to from anybody is, why is he selling the equity? So, Seb, any idea why he's selling the equity? I don't necessarily know if I've got an intelligent answer to that question. What I do think is interesting is, again, putting it in perspective,
Starting point is 00:57:46 I think, from what I understand, Saudi Aramco was the previous largest IPO, and they raised about $29 billion. And this is going to be one and a half trillion. Yeah. Crazy. And that is currently the largest IPO ever in history. We're talking roughly 50 times bigger. That's insane, absolutely insane. And so I think that either we've experienced massive hyperinflation, which I don't think has happened, or this is, again, kind of a canary in the coal mine as to where society is shifting right now. Where is monetary energy moving towards? And I'm curious to see if it can actually sustain this $1.5 trillion IPO. But going back quickly for one second, I did a little bit of research into this cooling question raised by this guy, Vlad Saigou, I think his name is. Yeah. Yeah.
Starting point is 00:58:32 And one of the things that I found really interesting is a lot of people are discussing this idea. they're like, oh man, how is it going to be possible? Cool stuff in space. There's all of these other input factors that we can't necessarily control. And when I did a little bit of digging, in many ways, it's actually the inverse of Earth. The problem with Earth is that we live in a very unpredictable environment. And so on Earth, you're fighting the ambient temperature that's moving up and down. You've got seasonal changes. You've got humidity. You've got dust. You've got weather. You've got vibrations. You've got seismic activity. You've got grid reliability. In orbit, you can pick your thermal environment and it's going to stay consistent.
Starting point is 00:59:11 You can pick your sun-shade cycle, so you can know, with almost 100% confidence, exactly how much sun you're going to get through any period of time. You can pick your surface temperature. You can pick your radiation angles. And so if anything, I think the hardest part, which we're seeing is changing rapidly, is actually getting this technology up into space. And then once it's in space, it's a very different environment than on Earth, because you've got this predictable environment. So we no longer have this problem of, call it, an unpredictable environment.
Starting point is 00:59:45 Instead, the whole environment is programmable. And so I find that really, really fascinating. Yeah. It's so funny how many topics we had lined up for this discussion. Let's move through some of these really fast because I think they're super interesting.
Starting point is 01:00:02 This next one we're going to go through very fast. Chamath shared this tweet very recently, and he says, to make the math simple, and we're talking about Waymo versus Tesla. He's saying, to make this math really simple, the car on the left, which is the Tesla, is $25K to build. The car on the right is $150,000 to build. And it's this graphic that shows the sensor type and the number of sensors that are on each car. And it's just straight-up laughable, the difference between what Elon's doing with the Tesla and its autonomous driving with eight sensors, and they're all just cameras, versus Waymo with 40 sensors, six that are radar, five that are LiDAR, and then 29 cameras. And not to mention, it's very ugly. And the point of his post is that, from a cost standpoint, he's just going to annihilate the competition. But where this could get interesting is on the safety front: if Waymo goes into the political landscape, and let's say that
Starting point is 01:01:08 they can verifiably prove that they're X number times safer. Now you got this interesting dynamic where politically different jurisdictions could say, oh yeah, we don't allow that car in here because it's not safe enough. Look at how this other company is doing it, which is 10 times safer to the residence inside of our jurisdiction. And I think that that's a really interesting talking point and interesting thing that he's going to have to deal with. And, you know, he might have figured it all out from a math standpoint and something that's safer than a human driver. I guess you have to also consider, is it safer than what else could be done? And we'll move on to the next topic unless you have any, like, keen insights on that one, Seb, just for the
Starting point is 01:01:52 interest of time. Honestly, the only point that pops up is just like when you're looking at Waymo and when you're looking at Tesla, and we discussed this last time, is maybe jump back. back to the other episode of Furious More on this kind of topic. But LiDAR is extremely accurate with depth, works well in low light, great for like precise navigation, but it's unbelievably expensive. And what Tesla is trying to do is just a vision only. And it's like a long-term bet that AI is going to improve, where it's able to interpret its environment far more accurately and make far more realistic decisions similar to a human. And I think that the thing that's
Starting point is 01:02:26 interesting is like, you're looking at these two cars and it kind of reminds me a little bit in some ways of like Blockbuster versus Netflix. And you've got Tesla Netflix, which is kind of on this path towards like meeting the user, advancing society. And then you've got Blockbuster, which when it's being taken over by Netflix, ultimately it's like, how do we attract people? We need to put candy in the aisles. And I feel like Waymo is just like, let's just put another red light on the top.
Starting point is 01:02:54 Let's just put three more little cameras. And so I do think it's interesting, but I have to say that's not coming from an informed place. I don't understand the depth of technology on these Waymo cars. Yeah. I mean, at the end of the day, I think more, more data that you can collect, the better it's going to make, it's just going to have more information to make a more informed decision. It's just like, what scale are you getting that it's just 2% better or it's 1% better, but you have to pay 10 times the amount, right? Like, it comes back to this classic engineering quandary, which is, can it do better? And the answer is yes. I mean, you see this with just the
Starting point is 01:03:34 space race. Like, when you looked at, I forget the name of the organization that Elon's competing with, it's like this conglomerate of Lockheed and I think Boeing that formed like this alliance. And their requirement was no mission failure, no matter what, we can never have a failed mission launch. Well, the reason their cost is 100x Elon is because of this requirement of a no fail, nothing can possibly go wrong. And Elon's kind of looking at it like, well, that's just over engineered. Like, there can be failures. Then we'll optimize those. And so I think you have the same thing happening with that particular comment.
Starting point is 01:04:12 And the really question is, is how much better would the driving be with all these extra sensors and relative to the cost that it's going to take to do it? And in the end, I don't think it's going to matter because I think he's going to go out there with so much volume and so much scale that is just going to be an absolute clinic. And the competition is, they're just going to be too far behind to ever catch up to where I think he's about to go with all this. Totally. And I think that the other thing is like, Claude Shannon talks about like information and exformation. And it's just like what is you can have more information, but in that information is so much noise. And so at what point, like to your point, is kind of like the drop off where, more information is only actually creating noise and it's reducing the signal.
Starting point is 01:04:58 Okay. This next one is really interesting. I'm sharing a post that somebody had here and the post says, Elon Musk has reportedly requested his own personal office workspace inside Samsung's semiconductor fab site in Texas. And Elon has previously mentioned he will personally walk the factory line to accelerate progress. And then I found this really interesting comment where somebody else is also talking about this AI-5 chip, which is their inference chip. It's like an ASIC.
Starting point is 01:05:30 It's very specific for inference in AI, for his cars and for the humanoid robots. That's what it's going to be used for. And then Elon responded to this person. He says, AI5 and AI6 engineering is my biggest time allocation at Tesla. AI5 will be good. AI6 will be great. So the reason I'm highlighting this is I don't think that this gets a lot of attention. You typically see the really swoopy things, like the rocket launches and the cars driving themselves down the streets.
Starting point is 01:06:03 But when you look at him and all of the things that he's doing for him to say this is like his number one priority, for me is a really, really interesting thing. So I started digging into this AI5 chip. This is going to be 40x the performance increase over the existing chips that he has in the cars. It's eight times more raw compute. It's nine times more memory capacity. The other thing that I think is it's five times more bandwidth increase. And like all of these things, when you look at it, he's putting it in the car and he's putting it in the humanoid robot.
Starting point is 01:06:40 And these numbers that I'm quoting you are just for the five. It's not even the six, which he's saying is going to be leaps and bounds over the five. So one other point that I think is interesting is he's not using Nvidia chips. He's going upstream and doing his own thing and using that in a very specialized, specific way that how in the world are you going to compete with this? That's my question for anybody dabbling in any of these spaces. How in the world are you going to compete with this? There's two points that kind of come to mind around this kind of chip topic. And it's kind of like one, I think sometimes where we struggle is we were just talking about
Starting point is 01:07:21 this information. We can get more and more information. But at the moment, I think AI, autonomy, self-driving, all of these things, they're kind of like head of wall. And it is not because of the software. It's because the hardware can't process everything fast enough. And so when you've got these chips processing at 40 to 80,000 times, you're going to, 80 times faster than what we're using today. That means that they're able to see if we take
Starting point is 01:07:44 like Tesla cars, like other cars, pedestrians, weather, road lines, lights, all of these things that's able to interpret that information to distill it down and figure out the signal far, far, far quicker than what we can currently do today. And I think kind of the second point that I find even more fascinating is this idea of it's moving a lot of this stuff in-house. Now, from what I understand digging into it, they're still going to be using third parties like Samsung and TSM to actually make the chips, but they're no longer relying on Navidia or Navidia to design these chips. Instead, they're kind of taking on chip architecture, interconnects, compilers, runtime, neural networking inference engines, firmware, training pipeline, all of these things
Starting point is 01:08:28 is coming in-house. And so if it's coming in-house, this is where I think that it's really interesting because an analogy that kind of came to mind is when you think about people in comparison to Microsoft or anyone else. Apple really started crushing the market when it brought a lot of its chips in-house. Apple's M1 chip delivered twice the performance of the Intel chips at 25% of the power. And Apple's M2 and M3 chips beat Intel and AMD by per watt three to four times. And so because they control the chip and the operating system and the compiler and the memory, there's so much more of a cohesive design and they can operate far more efficiently than any of their competitors. And you and I kind of talked about it very
Starting point is 01:09:10 briefly before we jumped on, this idea that what we're seeing is Elon is kind of bringing everything in-house, like he's got SpaceX and he's got Starlink and he's got Tesla and he's got Neurilink and he's got Grok and he's got the boring company, he's got Tesla energy. All of these things he's trying to control because he wants control over the output. He wants to minimize operating costs you want to maximize efficiency. And so that's what I think is so cool about him bringing the chips in-house, is it's just going to create far more efficiency and control. I just wonder, and I saw somebody post this online, of potentially using the cars.
Starting point is 01:09:43 Like, let's say I personally own the Tesla. Let's say that I agree to while it's charging in my garage at night, that you could be using these inference chips in order to conduct, you know, AI energy consumption tasks, compute tasks. compute tasks for queries, for grok, call it. I suspect this is maybe not possible, but the question was just fascinating because when you think of how many cars are out there and how often they're just sitting there idle, but you have this super powerful ASIC that's optimized for inference.
Starting point is 01:10:17 And I think maybe the other reason why it might not be possible, I think it's a very specialized inference for image intelligence as opposed to large language model or are other forms of intelligence. I think it's geared specifically for just image generation, right, and understanding where you're at in space and time in order to drive a car. So I don't know if it can be reallocated to these other use cases or whether that would be something that would even be entertained. But I think when you just add up the number of cars and the fact that they're all networked, maybe there's something there that could be used for, I mean, maybe he could power video games or something. I don't know.
Starting point is 01:10:59 That's fascinating, actually. Right. You wonder, I think the issue right now would be, let's just say, I don't know, the human brain, from my understanding, I think the brain uses like 30% of our energy requirements. And so our brain consumes a lot of energy. Now, how efficient is a car? If you've got a car turned off, but you've got its chip running while it's parked in the garage, how much is that going to cost to run that chip overnight, doing some other task? And then what happens if Elon is like, okay, you know what? It's costing this much to go and build all these data centers, but we've got all of this compute power just sitting idle.
Starting point is 01:11:34 We're willing to give you, I don't know, five cents per hour your compute, which then covers your cost of electricity. Yeah. So people can then monetize their car, not even from a taxi standpoint, but from just a compute standpoint. What happens when you can start hashing and frigging Bitcoin with your car overnight and you're just generating revenue while you're asleep? I think that's kind of fascinating. Yeah. I think there's something there because it's just a powerful resource that's sitting for all intents of purposes would be sitting idle. You know, I, geez, man, your imagination is truly the limiting factor with where this is all going.
Starting point is 01:12:12 Let's go ahead and wrap there. Seb, we've been saying we can just keep going on and on because this stuff is just endless. I know you and I wanted to kind of talk about how when you look at everything that Elon's doing and how it seems to like all be converging. and the foresight that he would have had to kind of piecemeal all these different business entities together to kind of point it all at this pinpoint moment in time where I think the general public is seeing it all converge. I just can't imagine what was in his head 10, 15, 20 years ago in order to bring all of this to this point. But maybe we'll save that for another day. Absolutely. You know what?
Starting point is 01:12:51 I think that's a whole topic of discussion in itself looking at like how does Elon think. How does he plan this stuff in advance? Like, to your point, like, what was his mind thinking about 10, 20, 30 years ago to be able to kind of think about all of these little nodes that have to come together to be able to be where we are right now? Yeah. I mean, I'm just struggling to make sure I pick up the kids from school on time and things like that, right?
Starting point is 01:13:14 Like, all right. Seth, give people a hand off to your book or anything else that you want to highlight. Yeah. Again, like, I really appreciate everyone kind of giving us the listen. And we've had a handful of people kind of reach out and just kind of. mention their topics that they're fascinated about, and we've tried to bring them into these discussion points. So if there's anything that you guys are interested about, anything that you want further clarity on or you want us to dig into, feel free to just comment on
Starting point is 01:13:39 Twitter or on YouTube, and we definitely scan those comments. But again, like, you can find me at said bunny, B-U-N-N-E-Y.com. That's my website and vlog, or you can find me on Twitter, and my book is the hidden cost of money, which touches on kind of subjects adjacent to this, more related to money. But again, I appreciate you having me on press, and it's always, It's awesome to chat. Well, guys, I hope you guys are enjoying it as much as Seb and I are, because this is a blast. And hopefully we're rounding up and filtering the high signal things that are happening in tech. And we'll keep doing this.
Starting point is 01:14:09 I don't know, at least once a month to just kind of show you guys what's happening in the world and how exciting it all is. So thanks for joining us. And until next time, thanks for listening. Thanks for listening to TIP. Follow Infinite Tech on your favorite podcast app and visit the Investors Podcast. podcast.com for show notes and educational resources. This podcast is for informational and entertainment purposes only and does not provide financial, investment, tax or legal advice. The content is impersonal and does not consider your objectives, financial situation or needs. Investing involves
Starting point is 01:14:42 risk, including possible loss of principle and past performance is not a guarantee of future results. Listeners should do their own research and consult a qualified professional before making any financial decisions. Nothing on this show is a recommendation or solicitation to buy or sell any security or other financial product. Hosts, guests, and the Investors Podcast Network may hold positions in securities discussed and may change those positions at any time without notice. References to any third-party products, services or advertisers do not constitute endorsements, and the Investors Podcast Network is not responsible for any claims made by them. Copyright by the Investors Podcast Network. All rights reserved.
