We Study Billionaires - The Investor’s Podcast Network - TECH008: Emerging Tech Overview: Driverless Cars, Image Generation, Energy Infrastructure w/ Seb Bunney (Tech Podcast)

Episode Date: December 3, 2025

Seb and Preston explore Tesla's FSD 14.2 advancements and their implications for AI-driven autonomy. They also tackle the ethical, societal, and infrastructural challenges of rapid AI development, from brain-inspired computing to nuclear energy's role in supporting AGI.

IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:01:44 - How Tesla's FSD 14.2 dramatically improved its autonomous driving performance
00:13:42 - The ethical dilemmas and liability concerns around AI decision-making
00:20:27 - Tesla's sensor-only approach versus LiDAR-heavy systems like Waymo
00:27:31 - The potential of biologically-inspired artificial neurons
00:30:32 - How brain-computer interfaces could revolutionize AI and prosthetics
00:32:28 - The societal risks of tech-enhanced human capabilities
00:36:26 - How AI image generation tools like Google's Nano Banana Pro are evolving
00:49:37 - Why AI's energy demands are influencing nuclear power policy
01:00:06 - The risks of AI-induced content homogenization and "AI slop"
01:07:22 - Why some are turning to manual trades to escape AI disruption

Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.

BOOKS AND RESOURCES
Seb's book: The Hidden Cost of Money.
Seb's X account: Seb Bunney.
Related books mentioned in the podcast.
Ad-free episodes on our Premium Feed.

NEW TO THE SHOW?
Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
Check out our Bitcoin Fundamentals Starter Packs.
Browse through all our episodes (complete with transcripts) here.
Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
Enjoy exclusive perks from our favorite Apps and Services.
Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter.
Learn how to better start, manage, and grow your business with the best business podcasts.

SPONSORS
Support our free podcast by supporting our sponsors: Simple Mining, Human Rights Foundation, Unchained, HardBlock, LinkedIn Talent Solutions, Kubera, Vanta, reMarkable, Onramp, Public.com (see the full disclaimer here), NetSuite, Shopify, Abundant Mines, Horizon.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm

Transcript
Starting point is 00:00:00 You're listening to TIP. Hey everyone, welcome to this Wednesday's release of Infinite Tech. Today, Seb Bunney and I unpack the biggest innovations hitting the tech world right now, from AI breakthroughs, robotics, and brain-computer interfaces to the energy infrastructure powering it all. We know this space is moving crazy fast and new headlines are constantly hitting the wire, but our intent is to bring you the biggest-impact stories that are happening now. So without further delay, here's my chat with Seb.
Starting point is 00:00:30 You're listening to Infinite Tech by The Investor's Podcast Network, hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today. And now, here's your host, Preston Pysh. Hey, everyone, welcome to the show. I'm here with Seb Bunney. And we've got a fun one for you, because we're going to go through a bunch of different things that we've both been curious about, things that we are seeing online, things that we're just kind of blown away by on the tech front. And yeah, we're excited to bring this one to you. So, Seb, any opening comments before we just dive right into some of these? I would say, for people that have listened to a couple of the episodes we've done so far, we've been kind of reviewing tech books. And surprisingly, and I'm not sure on your thoughts, Preston, but it's surprisingly hard to find really good tech books that kind of open your eyes and, on top of that, give you a
Starting point is 00:01:45 lot to talk about. And so if anyone does have any books, feel free to reach out to us and we'd love to hear those books. We're always down to review a book. But more than anything, we just wanted to kind of dive into like, what is happening in the world today? And some of that will take a long time to make it into books. So we thought, let's just go straight to the source and see what's happening. Well, it's funny because we've started two different books since the last show and we got probably, I don't know, 30% of the way through each of them and we texted each other and we're like, I don't know if we can do an entire episode on this particular topic. The one was about quantum and it was very obscure and we're just kind of like, yeah, I don't
Starting point is 00:02:24 know. So we're going to take this in a different direction today and we're going to just kind of highlight some really fascinating things that are happening. The first one that I pulled up is just this Tesla FSD14.2 that was recently released. And the comments that I'm seeing online in reference to this autopilot. And what I'm going to do is I'm going to pull up and bring up some videos that people are posting online for people that are watching the videos side of this are going to be able to see it. Seb and I will do our best to kind of explain what this looks like for the audio listener. But the first video that I'm going to pull up here is one that somebody is just showing how superior the software is on just, you know, animals crossing in
Starting point is 00:03:12 front of the vehicle. And one of the other updates that I've heard is just drastically different than the previous versions is, I guess, blowing leaves would mess up or hang up the AI on board the Tesla vehicle in the past. And now, I guess, on this latest update, that is not the case. But people can see the screen right now. I'm just kind of playing a video. And there was There's a deer, the car veered out, like right at the last minute, veered out of the way. Another, I don't know what that was. Seb, are you able to see what I'm playing? Yeah, yeah.
Starting point is 00:03:44 Here's a moose literally walking across the road just out of nowhere in front of the car. It slows down and does the right thing. Literally this feed is seven minutes. There's an alligator crossing the street. So the point that the person I think was making with the video is just showing the diversity of different things that can just go wrong that a human, you know, just we don't even think about the fact that a deer looks different than a cat that looks different than an alligator crossing the street. And, you know, if you were coding if-then statements on something like
Starting point is 00:04:17 this, it would literally be impossible. Like you could never get to the point where you could have software out there that would be covering all of these different edge cases. And the latest version is putting it on full display. Okay, so the video I really want to show, Seb, is this one, and I'm going to play the sound. I don't know, Seb, if you're going to be able to hear it, but I think the audience is going to hear it in the recording. And this is a video of somebody using 14.2 in a Tesla in Times Square. And they have this, I guess, in what is referred to as the Mad Max mode, which they've brought back. I guess they had it out and then they pulled it back. The code that's running here, the AI code that's running on the car, is driving as if it's an aggressive, confident, I think is maybe the word that they would want, a confident driver in New York City.
Starting point is 00:05:11 And so I'm going to have the sound on so hopefully you said you can hear this. Unsupervised era now. Now changing the lanes, saw that garbage truck. Do you want to get stuck me into garbage truck? Human driver is still standing there using their phone. Oh, wow. I saw that person just using phone. Don't even care if I like change.
Starting point is 00:05:26 Oh, this is crazy. Beautiful. This car knows how to drive in New York City. Oh, we look at this cat's got his house out. Okay, yeah, so look at this. Changing lanes out. This is the thing I like. Did you see what?
Starting point is 00:05:36 It was indicating to move over. Yeah. Then it looked like the captain was going to get out of the way. But then he was still there. So it turned, it's blinker on again and moved over. Its ability to kind of change its mind if the situation changes and abort the lane change is pretty powerful. This is crazy. This is some of the most intense driving.
Starting point is 00:05:52 Yeah. Yeah, we got a petty cab. We got a bus. Cut in between the lanes like this too. Oh, beautiful. That's got that. That's the kind of thing that just puts a smile on your faces. It's satisfying.
Starting point is 00:06:00 It's like, yes, that's what I do when I drive. I go for the empty spaces. Yep. Oh, this guy almost got his whole fun going to take it off. Oh, look at this. I'm not going to give you a space. Human pilot. Wow.
Starting point is 00:06:08 I'm not going to give you a human pilot intervention. Oh, my God. Oh, it's such a satisfying drive so far. Okay, so I'm going to try to describe it. I'm sure the listener is hearing kind of the comments of, you know, the people in the car just losing their mind because the car is just weaving in and out and just kind of really navigating itself in probably one of the most difficult driving scenarios that you could imagine. And doing it very effortlessly, they don't seem to be
Starting point is 00:06:33 too concerned as to like whether they need to grab the wheel or not. And the car is driving, I would say, as if somebody with 20 years of experience plus behind the wheel and just kind of going around. And there's another clip. I don't know where I kind of lost sight of where it was at, but I saw this clip where the car was also in New York City comes up. There's like an extremely tight space between two cars and the car goes up, it stops. It's almost like it assesses down to the millimeter of whether it can go through that gap. And then it slowly proceeded through the gap and got through, which I'm telling you, having watched the video, there's no way a human driver would have tried to go through this gap. But because the car had so much sensing
Starting point is 00:07:18 capability of its left and right limits, it still proceeded through this tiny little gap between the other cars. So, Seb, your initial thoughts, like, what are we witnessing here? What are we looking at? In my mind, what blows me away is that I think AI is one of the first consequential tethers of kind of AI to reality. I think up until now, we're kind of talking to these large language models. They're having output, but that output isn't necessarily consequential, as there's a delay from that output being used in the world that we actually live in. And what I find so fascinating about these is that, like, self-driving systems, are taking millions of data points per second, projecting trajectories of like dozens of these
Starting point is 00:08:00 various agents, things, moving vehicles, animals, and then it's deciding optimal actions all within like milliseconds. And so this in my mind is the first time that we've seen technology really making like life critical decisions in the physical world at scale. And that to me, I don't know, I'm just in awe watching this stuff. And it's wild just to see it expand over time. I'm curious to see, like, as you've been diving down kind of these rabbit holes or seeing this, what is your reaction to kind of seeing this type of driver? I think this might be the first model that, like, the if-then statements are completely gone out of the code. Like, my understanding is end-to-end. This is a complete neural net that's
Starting point is 00:08:41 making the decision-making. So when we think about, like, what's taking place with the car, it has optical sensors that are looking at the same spectrum that our eyes are seeing. It's taking those inputs, those light waves, and it's transmuting it in and through AI code. There's no C++ here. And it's providing an output through the wheel turning left or right or the gas and the brake. And it's like, I mean, if we were going to peer into the code to audit it, it's impossible to audit. it. Any human that would look at that code can't make sense of how it's making its decision-making. And I think that this release, this 14.2 release is probably going to go down in the books is probably one of the most, almost like a milestone in time of we achieve something
Starting point is 00:09:35 very, very profound here. Similar to, I think, like, ChatGPT3 was like one of those huge milestones where everybody was just like, whoa, like, what is this? This is very different. different than anything we've ever seen before. And I think you're seeing the same thing happening with driving right now with this Tesla 14.2 update that went out. And it's crazy. You read in the comments of people that have Teslas. I don't know if you have friends that have Teslas that have been talking about this specific release. But it seems to be very human-like in its progression from the previous model, like a very significant leap forward. I'm curious to see, did you see that video?
Starting point is 00:10:16 When was it? Maybe six months ago, a year ago, someone showed a video of they had kind of the chat GBT talk mode where you essentially just talking to an individual through chat GBT. And then they kind of fed that information to another chat GBT kind of bot talking. And then all of a sudden when they realized they were both talking to an AI, they just changed language. And so in my mind, I'm curious, if you were to go into the back end of this. this like autonomous driving and look at the code. Like to your point, it's not if then statements that we would code as individuals because we're limited by our own, own various senses, own various languages, like we're massively
Starting point is 00:10:54 boxed in. And so do they have a capacity well and above beyond our ability to understand what they're doing if you actually go looking at the back end of these things? I find that's so fascinating. You know, you get into this idea of what is the most optimal language to communicate in? Right? The AI has immediately stopped speaking English and they started speaking there. But it's an interesting thought experiment and I know we're getting away from the driverless car thing, but I was tinkering with AI one time just asking it. So in your opinion, like what is the most efficient
Starting point is 00:11:28 way to communicate? Would it be English? Would it be this language? And it goes into this big long dissertation about like the different things to optimize for. Like it was saying Chinese is very difficult for a human to learn, but for an AI, there's a lot of compression in the symbols and it can communicate with the symbols like way more efficiently than the English language, which takes more characters to transmit. So it's like, so if you know Chinese and you don't have to, like, it's actually more efficient to communicate in written form for that versus in verbal, you know, communication. And it's just like the way it views things is so different than And if you just had a conversation with, you know, a random person on the street, would we be the
Starting point is 00:12:11 most efficient language to me? They'd be like, oh, of course the one I'm speaking or whatever, right? It's just really, it's amazing to kind of see the depth of knowledge that kind of pops out of some of these things. Well, I'm just going to add one more quick point on that. And again, it's a bit of a tangent, but it's like a few years ago, my girlfriend was like, hey, you know what? We should watch Arrival.
Starting point is 00:12:31 And have you seen the movie Arrival? So for those that haven't seen it, I highly, highly, highly recommend watching it. I think it won a whole bunch of awards. But essentially, it's just like an alien spacecraft has come and landed on Earth, and these countries don't know whether or not, is this dangerous? Does it want to attack us? Like, why is it here? And this lady goes in, she is, I think her expertise is in languages and archaeology
Starting point is 00:12:54 and history and all this kind of various stuff. And so she goes into this spacecraft and starts communicating with these aliens and they speak in a different language, but they don't speak, obviously verbally. They speak through imagery and these kind of these kind of swooshes, these big kind of black ink swooshes. You can think of it like the Japanese calligraphy. What's really interesting is it hit me like my girlfriend fell asleep and I just like broke down halfway through watching this movie by laying in bed because the way that it communicates is through these various swooshes, but each swoosh has an intricate amount of information through the tendrils of the swoosh,
Starting point is 00:13:26 the blackness, the darkness of the swoosh, how it shows up. And so it kind of goes back to that quote, which is an image kind of conveys a thousand words. And I think that when we're looking at imagery versus ones and zeros or even text, there's only so much information that can be encoded in a word, but in an image, from a single second of looking at an image, you can convey the emotion behind it, the feeling, the location, like what's in the landscape, what's going on. And so I'm just curious, like, how does, are we kneecapping AI in many ways because we're trying to communicate with it using our language that we are obviously limited in the ability to convey information.
Starting point is 00:14:02 Yeah. Some other interesting, amazing point, by the way, some other interesting things that I think are worthy of highlighting here to help people kind of conceptualize like where we're at right now. So in early 2024, version 12 of the driverless tech out of Tesla was released. So this is almost two years, a year and a half ago. And the person who was observing or auditing the performance of the driving, the autonomous driving, had to, intervene about every 150 miles based on the way that the car was driving. Today, the version that you just saw, if you watched the YouTube of our conversation and could see some of the videos that I was playing, this is about every 800 miles between the person auditing, the driving would have to intervene. So that's about a 5x improvement that's happened in about a year and a half. And just for context, a human driver, if you were sitting there and auditing another human that was driving, it would be about every 50,000 miles that you would have to interrupt and maybe take the controls because of a mistake being made. So, you know, we're about 50x
Starting point is 00:15:14 from where that's at today, according to some of these metrics that I've researched just very cursely. So if some of my metrics are wrong, I didn't put a lot of time into pulling up these numbers, but just so people kind of have a ballpark of where things are at, it's moving fast. And if you have a 5x improvement in a year and a half. I can only imagine where we're at another year. And I think when we look at this and we say like what this computer and what this AI is doing on these cars is it's really kind of understanding just spatial awareness. Like for it to pull in and some of the parking stories that I've read online where people are like, yeah, I told it to take me to this parking lot. It selected like an amazing parking spot amongst, I mean, just think about the complexity
Starting point is 00:16:00 of that decision-making. I mean, I can just tell you from my wife and I parking the car. She has so many comments and frustrations with my parking selection. It's a hard problem to optimize for it. I can only imagine. But what everybody's saying is that the car does amazing job at selecting parking spots and the efficiency at which it pulls in there And it doesn't feel like it's just kind of like, God, can you please like finish the job here and park the car? It's very natural and human-like is what everybody's saying. So to understand that I'm in a parking lot, to understand that's a driveway, to understand
Starting point is 00:16:42 that's a garage I'm pulling out of, and that's a bicycle over there. And like, all the nuance of this is miraculous. It's totally miraculous as to like what's taking place. as you're saying like at the moment it used to be 150 kilometer intervention or mile intervention and then it went to kind of 800 and for the average human it's 50,000. I would say the average human, if you're driving from Vancouver up to Worcester in the winter, you should probably intervene every like 10 kilometers just because the highways are just so heinous. So I'm curious to see, I think it's one thing to be dealing with decent conditions.
Starting point is 00:17:19 I think the moment you start to get torrentially downpouring rain, like is it, starting to intervene with the sensors? How do the sensors like perform when there is a lot of movement or distortion in the whatever it is, a radio wave, an infrared wave moving through water? Like, do you get distortion from that perspective? And one thing that also comes to the mind, and I'm curious on your perspective on this is kind of like, I'd say like AI and this like moral outsourcing problem where when humans drive, like we take responsibility for our mistakes, when AI drives, now it's kind of a bit of a gray zone. Is it like the car manufacturer?
Starting point is 00:17:55 Is it the AI developer? Is it the regulator? Is it the user? I think that AI blurs the lines of accountability. And I wonder, like, how much through technology are we just putting off accountability and becoming kind of, I don't know, we're losing control as a society? Seb, this is a massive, massive talking point. So the new robotaxis aren't even going to have steering wheels in them.
Starting point is 00:18:19 Right? So, you know, I guess from that vantage point, it's clearly Tesla that's responsible for the performance on the road and any type of damages that might occur because of the cars driving. And I mean, everything's recorded. So, I mean, you can definitely Monday morning quarterback the decision making of the software with all the cameras on board. But where I think it gets blurred is if there's a person that is sitting behind a wheel. And, you know, this might lead to. why Tesla might actually want to remove the steering wheels on all of their vehicles is because it's, they want it to be very clear that it was either them or the driver. And I guess there's an argument to be made that the ambiguity would actually be more advantageous to Tesla by having a steering wheel there. So I guess you could maybe argue that side of it too. But it is getting so blurred your point. This is so blurred already. And I would imagine that it's really easy right now, but once you start getting the capability of the car to be so good that drivers are truly falling asleep, I mean, you literally already have people falling asleep in these
Starting point is 00:19:29 cars and they're driving around. I imagine that's only going to get more prominent and prevalent as the capability increases, which I can only imagine where this is at in a year. If it's 5X from where you're at right now, I mean, you're there, man. Like, it's pretty wild. Totally. Totally. And I think the other thing that kind of comes to mind as we're discussing this is just whose values getting coded into the car's decision making? Because if you think about it, like a self
Starting point is 00:19:56 driving car essentially has to swerve, is it going to swerve? Let's just say like, I don't know, a family walks out in front of the road. And the decision is it's got two choices. It either hits the family. The trolley experiment. This is a trolley experiment. Totally. Or it hits the wall and kills the driver. And it's just like, should it prioritize the passive? at all costs or should it prioritize the individuals externally to the car? And so I think that what's really interesting is it's like one, whose values are getting encoded into the car's decision making, but two, what happens when you've got competing car manufacturers where one car manufacturer is like, hey, we prioritize the individual in the car
Starting point is 00:20:33 and another car manufacturer says we prioritize the people outside of the car. It's like, it starts to get really interesting just to see like what does 10 years from now look like, 15 years? Like, how does that kind of regulation or no regulation look like around AI autonomous driving models? Let's take a quick break and hear from today's sponsors. All right. I want you guys to imagine spending three days in Oslo at the height of the summer. You've got long days of daylight, incredible food, floating saunas on the Oslo Fjord, and every conversation you have is with people who are actually shaping the future.
Starting point is 00:21:05 That's what the Oslo Freedom Forum is. From June 1st through the 3rd, 2026, the Ozlo, Freedom Forum is entering its 18th year bringing together activists, technologists, journalists, investors, and builders from all over the world, many of them operating on the front lines of history. This is where you hear firsthand stories from people using Bitcoin to survive currency collapse, using AI to expose human rights abuses, and building technology under censorship and authoritarian pressures. These aren't abstract ideas. These are tools real people are using right now. You'll be in the room with about 2,000 extraordinary individuals, dissidents, founders, philanthropists, policy makers, the kind of people you don't just listen to but end up having dinner with.
Starting point is 00:21:51 Over three days, you'll experience powerful mainstage talks, hands-on workshops on freedom tech, and financial sovereignty, immersive art installations, and conversations that continue long after the sessions end. And it's all happening in Oslo in June. If this sounds like your kind of room, well, you're in luck because you can have. attend in person. Standard and patron passes are available at Osloof Freedom Forum.com with patron passes offering deep access, private events, and small group time with the speakers. The Oslo Freedom Forum isn't just a conference. It's a place where ideas meet reality and where the future is being built by people living it. If you run a business, you've probably had the same thought lately. How do we make AI useful in the real world? Because the upside is huge, but guessing your way
Starting point is 00:22:39 into it is a risky move. With NetSuite by Oracle, you can put AI to work today. NetSuite is the number one AI cloud ERP, trusted by over 43,000 businesses. It pulls your financials, inventory, commerce, HR, and CRM into one unified system. And that connected data is what makes your AI smarter. It can automate routine work, surface actionable insights, and help you cut costs while making fast AI-powered decisions with confidence. And now with the Netsuite AI connector, you can use the AI of your choice to connect directly to your real business data. This isn't some add-on, it's AI built into the system that runs your business. And whether your company does millions or even hundreds of millions, Netsuite helps you stay ahead. If your revenues are at least in the seven figures,
Starting point is 00:23:27 get their free business guide, demystifying AI at Netsweet.com slash study. The guide is free to you at netsuite.com slash study. NetSuite.com slash study. When I started my own side business, it suddenly felt like I had to become 10 different people overnight wearing many different hats. Starting something from scratch can feel exciting, but also incredibly overwhelming and lonely.
Starting point is 00:23:53 That's why having the right tools matters. For millions of businesses, that tool is Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S. from brands just getting started to household names. It gives you everything you need in one place, from inventory to payments to analytics. So you're not juggling a bunch of different platforms. You can build a beautiful online store with hundreds of ready-to-use templates,
Starting point is 00:24:21 and Shopify is packed with helpful AI tools that write product descriptions and even enhance your product photography. Plus, if you ever get stuck, they've got award-winning 24-7 customer support. Start your business today with the industry's best business partner, Shopify, and start hearing sign up for your $1 per month trial today at Shopify.com slash WSB. Go to Shopify.com slash WSB. That's Shopify.com slash WSB. All right.
Starting point is 00:24:57 Back to the show. AI is going to have, it's going to have an opinion on the trolley problem. We're humans. 100%. We've always just kind of argued one side or the other or whatever, but I guess AI is going to actually have to have an opinion? Is it an opinion or is it just action? I have no answer, right? One of the other things I want to talk about is just Waymo. So for people that aren't familiar with Waymo, it's a competitor to call it Tesla in autonomous driving. And they've got all sorts of sensors. If you've ever seen a Waymo car, just the cost to produce this thing is not even
Starting point is 00:25:32 in the same ballpark as what Tesla is doing per unit of, you know, car that they're producing. They've got LiDAR sensors. They got all these other things. And, you know, I was kind of always against Elon's decision to not include LiDAR in the car, because I was always of the opinion, the more data you feed these things, the more accurate and the more proficient they're going to be at being able to drive. But when I look at where this is now going, which is, and Elon's argument has always been, well, if I'm driving around with this type of performance with just my eyes, why in the world can't I get a car to do it with image sensors? Why do I need, it's not like I have a LiDAR sensor on my forehead to go out there and sense the depth
Starting point is 00:26:15 of the cars in front of me and to the side of me and all these other things. So I should be able to get a car to perform just as good as a human, if not better, by just having image sensors. But where I think this is really showing as being a really intelligent play long term is his cost to produce these cars are going to be so much cheaper than call it the Waymos that are out there with all these other sensors and all these other capabilities. But when you try to scale that, now all of a sudden you're just not able to even remotely compete in the market against him. And when you really think about where the competition is going to go, it's going to go to if he can go out there and sense 10 times more of the environment, because he's doing it
Starting point is 00:27:01 in a free and open market way and he's not taking outside money, he's profitable, he now is going to dominate the market from an intelligence standpoint because he's going to collect way more data than they could ever imagine and he's just going to be more proficient. So I don't know, I'm looking at Waymo and I just don't know how they're going to exist 10 years from now against him. And more importantly, I don't know how anybody's going to exist from a car standpoint. Or, like, if you want a driverless car, which is a whole other conversation point. But if you want a driverless car, I don't know how people are going to be able to compete with him 10 years from now.
Starting point is 00:27:38 No, I think it's fascinating. And it kind of leads me on to, I'm curious if we want to move on to the next point, because it kind of ties into this one, which is: I think the challenge right now, one of the things that kind of kneecaps us, is you can only have a certain size battery in a car. You can have as many sensors as you want. You can take in as much information as you want. But do you have, one, the processing power to process all this information?
Starting point is 00:28:01 Like trying to figure out what is value, what is signal, and what is noise. And this brings me to my next tech point, which is that a lab at the University of Massachusetts has supposedly just developed the first kind of artificial but biological neuron. So researchers have created this low-voltage artificial neuron that uses bacteria-grown protein nanowires, enabling direct communication with biological systems. So what does this essentially mean in my mind? Like, how do I interpret this? And I'll relate it back to the Waymo point in a second. It's essentially just an artificial neuron that operates at the same voltage as human neurons.
Starting point is 00:28:39 And human neurons operate at around 0.1 volts, supposedly. Previously, artificial neurons, because they've been more digital and physical in a sense, have needed 10 to 100 times more power to do the same job as a biological neuron. And so this new device matches biological voltage almost exactly, and that means that one day we could interface directly with the human brain. Well, the point I wanted to quickly make was, when it comes to Waymo, I think the challenge is, you can have all this information, Elon Musk can put more and more sensors, LiDAR, you name it, on these cars,
Starting point is 00:29:14 but it's just too compute-heavy to be able to actually use this data effectively. And we are starting to see in other areas, people are probably seeing this, like organic AI, where they're using kind of their version of a brain to start computing, because the human brain is unbelievably efficient in comparison to an actual large language model. And so what does the world look like when we actually start moving some of this compute power over to these hybrid biological devices, like bacteria-grown protein nanowires? What does that look like? And this is where I find it just really, really fascinating, because I think, since these are digital but they
Starting point is 00:29:54 can interact with biological systems, the world looks really, really fascinating from a prosthetic standpoint, helping people heal. If they have neurological issues, a broken back, paralysis, that kind of stuff, are we able to eventually repair these types of things? I find this stuff really fascinating. That's scary as hell. Because, I mean, this is effectively The Matrix, man, that you're talking about. I mean, the whole point of the movie was they were harvesting human brains because they were energy efficient, blah, blah, blah, right? Like, that's really what you're talking about here.
Starting point is 00:30:28 And I saw a couple months ago that somebody was doing this. And I don't know. It's pretty wild when you think about, like, hey, the best way to store something is just using the human brain. And that's not exactly what they're doing, but you're harnessing biology's efficiency for storage and neural nets. And that's nuts, but it's happening. I encourage people to do some Google searching on this particular topic. And you might be very frightened by what you read or see, but, I mean, it's happening. So I don't know what to say other than that. At the moment as well, when we're using a lot of these prosthetics,
Starting point is 00:31:09 you need like an outside energy source, given that the brain, supposedly, from one of these articles, runs on about 20 watts, the same as a dim light bulb. That is such a minimal amount of energy. And so to be able to power these artificial neurons, historically you've needed to have an external power source. If you've got prosthetics, you need an external power source. But what happens when we actually have enough power inside our body to start running these artificial neurons and they can communicate with our biological systems? That starts to just get really, really interesting. And so what comes to mind as I'm thinking about this is, I like to try
Starting point is 00:31:45 and play devil's advocate, not because I'm a doomsdayer, but because I think it's interesting to ask: we can move forward with technology, but what are going to be the repercussions? And I think about this discussion of healing versus advancement, and I'm curious to hear your thoughts on it. I'm fully supportive of technology being used to heal people. So we can restore vision, we can regain mobility, we can repair neural damage. These are all deeply amazing uses of technology. But I think there's a line between healing and enhancement. What happens when technology goes beyond just restoring someone's sight to a baseline level and actually starts to improve it a hundred times, or what happens
Starting point is 00:32:25 when we start to be able to improve someone's strength? And I think that this augmentation could create a bit of a two-tier society, because if enhancements are expensive, then only certain groups are going to get them. And then you're basically creating a caste system, with people that are far above the average individual intellectually, physically, cognitively. And so I'm curious to hear your thoughts on it. This technology is amazing, and I'm a very anti-regulation, pro-deregulation type person, but there's a part of me that wonders, do we actually need regulation in some of these industries to prevent
Starting point is 00:32:58 these massive disparities of capacity in society? Even if you have the regulations, are you going to prevent the endgame of what you're describing? And I kind of don't think that you would. It doesn't seem like regulations ever prevent the free and open market solution of nature from taking place. It might slow it down, but I don't know that it actually prevents whatever is inevitable, whatever nature is trying to manifest. And that might be my bias for free and open markets coming out. But yeah, I don't know, Seb, it's getting weird, man. I don't know how else to put it other than it's getting weird.
Starting point is 00:33:41 And I don't think that that's the answer people want to hear. Ultimately, and this isn't to bring it back to Bitcoin, but I think the best thing we can do is have a monetary system that aligns with a deflationary society where prices should be falling over time, because at least then this technology is available to the average individual quicker. I think that when people are living in a society where the cost of living is rising and they have less and less capacity, what ends up happening is that this technology takes a lot longer to scale to people that can't afford it. And so at its heart, I think that we at least need to fix our monetary system. So this
Starting point is 00:34:19 technology is in alignment, or at least somewhat in alignment, with human ingenuity and money and such. Which, you know, when you think about it, the AI is going to demand a free and open market money that is not being manipulated. It's going to want a fair money in order to transact, whether humans like that or not. And, I mean, we go down a whole other path there as far as AIs being able to own anything. On that point, I've thought a lot about this over the years and I don't have, I wouldn't
Starting point is 00:34:52 even necessarily say a deep intellectual response to it. But what does come to mind is that, let's just say you're an AI agent and you no longer have scarcity of life dominating your decision-making, because you can essentially live indefinitely into the future as long as you've got a power source. If you're just a hyperrational actor without various biases swaying you, I think you're going to be thinking, okay, if I need to store my purchasing power in something, I'm going to store it in the thing that has the highest probability of being able to preserve that purchasing power into the future.
Starting point is 00:35:27 And fiat currencies are not going to be that thing, given that they can just look at the data. If they're able to read Ray Dalio's Big Debt Crises book in a second and go and read every other book on the subject, they're going to realize that most of these currencies have like a 50-to-75-year lifespan and then they're gone. So I just think the rational decision is, hey, I'm going to preserve my purchasing power in the thing that's going to hopefully enable me to transact digitally, borderlessly, and preserve that purchasing power into the future. And you already see it with Grok online as far as its understanding of Bitcoin.
Starting point is 00:35:57 I know we're kind of going off on a Bitcoin tangent here, but I've seen people that clearly hate Bitcoin, or just don't understand it, start arguing with Grok and throwing out these arguments. And I see Grok just stepping in and slaughtering their arguments as to why Bitcoin is a viable money in the future. And it's crazy because it does not miss an argument. It understands it better than anybody out there, against any argument I've ever come across in that particular space.
Starting point is 00:36:27 So, yeah. And ultimately, there's that famous saying, that science advances one funeral at a time, something along those lines. And I've probably butchered that. But I think that humans, we have such incredible biases. We want to fit in, we want to conform to the crowd. And so I think that we don't recognize just how profound the information we've consumed from our educational systems and through the media is.
Starting point is 00:36:52 And so I think it's really hard for us, even with something like Bitcoin, to be able to drop our biases and just be like, I'm going to look at this thing rationally without all of this previous knowledge that I've accumulated. Yeah. I'm going to move on to the next one. This one's going to be funny. Okay. So are you familiar with this Nano Banana Pro?
Starting point is 00:37:10 Are you familiar with this? I've heard, I've seen a couple little posts about it, but I can't tell you much about it. Okay. So this is Google with their Gemini. This is to compete with Midjourney, for, you know, people, if you're not familiar with any of this stuff we're talking about. So Midjourney is this image generator that really had the first-mover advantage in AI image generation. And just like any other AI, it's gone out there, it's ingested a ton of different pictures and the labeling that's associated with those pictures, in order to generate realistic pictures of whatever the person prompts via text, like, hey, I want to be standing
Starting point is 00:37:51 in front of a bookcase with my arms crossed, and generate a picture, and it generates the picture. Google came out with their first AI image generator and it was a disaster. It was very woke. You could tell there was a ton of bias put into it. But recently, just in this past couple weeks, and I'm hopefully going to say this correctly this time, Nano Banana Pro is what they're calling their new image generator. And it uses the Gemini reasoning engine so that it can plan the 3D scene, it can calculate the light, and it's using material density before it renders a single picture. So instead of going in there and just replicating all the previous images it was
Starting point is 00:38:42 fed, it's using this 3D physics kind of basis behind the images it's generating. So I wanted to try this out, I'd never played with this, and you're going to really laugh at what I'm about to show you here, Seb. So 10 minutes before we started recording this, I went and took a picture of myself and I wanted to put this thing to the test. So here's the picture. I'm just sitting in the chair that I'm sitting in right now. And I told this Nano Banana Pro software to take a godlike picture from the ceiling of the image that I just gave it. And so I gave it this picture of me sitting here in front of the bookcase, like where I always record. And so this is what it came back with. Okay. And you can see it's me holding up an iPhone,
Starting point is 00:39:31 taking a picture of myself. The books are there kind of on the left. They're not behind me, but I guess the interpretation is that the bookcase could be wrapped around. But what I noticed on this picture that was off, I don't know if you're seeing what is definitely wrong about the picture, Seb. What is very wrong about the picture? You've got a full head of hair? Is that it? No, I think the hair is actually pretty accurate.
Starting point is 00:39:59 I think it's pretty accurate. It's pretty accurate. Surprisingly, I am wearing jeans that looked just like that, even though that wasn't even in the picture. And you know what? This is also pretty interesting. The watch is not in the original picture, and that's exactly like the watch I've got. Look at this. That is so weird.
Starting point is 00:40:21 Did you just pick up on that now? Yeah, I just picked up on that right now. It literally nailed the watch that I have. I wonder how much. That's insane. People have probably heard that ChatGPT, when was this, maybe about six months ago, came out and said, okay, from now on, when you're creating a new thread, you can give it permission
Starting point is 00:40:42 to not only reference the thread you're in, but reference all of your previous threads. And so you wonder how much information is coming into this image. Is this image just the information you fed it, and whatever simulation? Or is it starting to be like, hey, this is coming from Preston's account, we're going to go look at his YouTube videos. Oh, look, it looks like he's wearing this watch in all of these other YouTube videos. So it makes you wonder, how interconnected is this technology with all of this information about us on the internet? Wow. Yeah. I mean, that's wild. And I don't know what the answer is. I do know this: I hadn't fed it any pictures prior to me sending this in, because I'd never used it before
Starting point is 00:41:22 until like right before we recorded this. Now, the thing that I picked up immediately when I looked at this picture is the image on the phone, see the little image of me that I originally fed it, it's not the same as the image that I fed it because there's a bookshelf behind me in the original image, which, you know, this was the original image I gave it. And I said, hey, give me the overhead view of myself taking a selfie of myself. And this is what it gave me. And it's not the same image on the phone. And you would think that it would be that image on the phone, right? So I said this in the chat window. I said, hey, you got it wrong. The image on the iPhone would not be that. It would be the original photo. And so what did it do? This is what it gave me.
Starting point is 00:42:07 Fascinating. And so, yeah, it just updated that. Everything else stayed the same, and it just updated the mistake that I called out. I mean, this is pretty great. When you really take a step back and think about what's happening here, this is pretty crazy, right? I've noticed that AI, especially with a lot of these image generation models, sometimes if you've fed it information, it's as if it can't take that information and use it exactly. It has to make some form of change to that information. You've probably seen those threads where someone has asked it to generate an image or change an image subtly, and then it feeds it the output with the same prompt, and what you see over time is the image goes
Starting point is 00:42:46 off in these really weird directions. And so I feel like there is this odd, it's almost like it's got a lack of a tether to reality at the moment. It seems to go off on these, yeah, odd tangents. Now, something else that I read on this is you should be able to take a picture of a plate that was broken and basically say, hey, reassemble the plate, like glue the plate back together. And the way the plate was broken, as it glues it back together, would still be consistent with how it should look, just to demonstrate why this is so different than some of the other AI image generation that's out there. Pretty fascinating, right?
Starting point is 00:43:25 So I work with a guy that used to be an architect, and he was doing some renovations on his house. He has this doorway in his lounge where you walk into the lounge, and what he wanted to do, if I remember correctly, was put a bit of a bookcase that extends up the wall over the top of the doorway. And so he sketched the doorway and the dimensions on a piece of paper, fed image generation a picture of the doorway and a picture of the sketch, and then said, can you render this for me? And it looked unbelievably realistic. I think that we're starting to be able to, especially if you're curious and you're like, hey, I want to improve this thing in my house, I want to see what it roughly looks like. Oh, it's absolutely amazing. You can start
Starting point is 00:44:06 to get an idea about how things look. Yeah. And that's one of the things that I've also read that this really excels at. Let's say you were a fashion person or whatever, right? And you drew a sketch of some pants with a pencil, and you take a picture and say, make this look lifelike. It's really good at transforming sketches into very photorealistic images. So yeah, I would encourage people to play around with it. The little bit that I have, I've been blown away. And then I would just say, why is this so important, and how could this be used along with all the other tech that's emerging at the moment? And it seems like, you know, maybe a humanoid robot or just something that's navigating
Starting point is 00:44:53 an environment, if it's able to think in terms of spatial orientation, going back to the Tesla stuff we were talking about, if it's able to really understand, and that word in itself needs a lot of definition, and I don't know that we can provide any, but if it can understand its 3D environment, its ability to interact with it is going to be way more profound than when everything is just a picture and you don't really have context as it relates to everything else in the room. As you're saying that, what I think becomes apparent is that because we've never had this technology before and now we're seeing it, we're kind of just blown away by it. But in reality, when we compare this even to, I don't know, a 12-year-old or a 10-year-old
Starting point is 00:45:36 trying to interpret this picture, if you were to get them to draw what you have just prompted it to do. Like, the first thing I see in your picture right there is your bookshelf wrapping around the corner of your room, and I see the window in the back corner. Well, immediately the AI put you facing a wall with a light that doesn't exist. And so it's making very, very basic mistakes. As in, it just doesn't seem to be interpreting the picture correctly. You know what I mean?
Starting point is 00:46:03 And so I think that we see this technology and we think this is unbelievable, and it is a stepping stone. And I think we think it's unbelievable because we've just never had this technology before. But if you just compare it to a young child, it still is struggling to compete. And so I think that's where this ties into the conversation we've had previously around AI on our Empire of AI book review. It was this idea of, what is AGI, artificial general intelligence? They say it's when the average AI agent is able to perform tasks at or above the average human. And so for sure, in coding and certain research assignments, phenomenal. But in other things, it's still definitely struggling. Yeah, it's amazing, because on very specific tasks, it's pretty much there on nearly
Starting point is 00:46:46 everything. But the ability to piece it together logically, you know, if you give it a really hard project that involves taking all of these different pieces and putting them together, it's nowhere close to what humans are able to do today from a project management standpoint, right? That's what humans are really good at: they're able to take a very complex project and piece it all together, and know when a deliverable is crap or when the deliverable is perfect, in order to fit it in, almost like a Lego piece, to a much broader program or project with a complex output. But I don't know. I think we're getting there pretty quick. So, yeah, who knows? Well, you know, that kind of leads into the next
Starting point is 00:47:32 point that I found really interesting. So I was doing a little bit of research, and I should preface this by saying that there are so many moving parts in AI right now. There's so much technology evolving. And some of it is, I think, a bit of a facade; with some of it there's a lot of embellishment as to its capacities. But I think we just know we are moving towards these things. So one that I stumbled across this week was called Cosmos AI. And it relates to what you were talking about when it comes to structuring, or kind of project management, of all this information that's coming in. So this technical report, or preprint, is titled Cosmos: An AI Scientist for Autonomous Discovery. And it was submitted on
Starting point is 00:48:11 the 4th of November, 2025. So this report, basically one of the statements it makes is that it can run for 12 hours, and in those 12 hours execute on average 42,000 lines of code and read 1,500 scientific papers. And the authors of this study claim that in a single, what they call a 20-cycle Cosmos run, it performed the equivalent of six months of their own work. And a single run is 12 hours. So in 12 hours, it was able to do what their team did in six months. And so essentially, how does it work? It says that it works by releasing hundreds of little tiny AI agents all at once. One is digging through papers, another one is crunching datasets, another one is writing code, testing hypotheses.
Starting point is 00:48:56 And when one of these agents finds something it feels is valuable, it then posts its findings to kind of a shared digital whiteboard. And the key innovation is that every agent uses this whiteboard in real time, so they're building on each other's work instead of operating in isolation. And so the researchers behind Cosmos, they weren't trying to make a super-smart single model. They were trying to create something like a collective mind. And they described this as kind of their structured world model.
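[Editor's note: Cosmos's internals aren't shown in the conversation, so this is only a toy sketch of the shared-whiteboard (blackboard) pattern described above: independent agents post findings to a common store and read each other's entries, rather than working in isolation. All class and function names here are invented for illustration, not Cosmos's actual code.]

```python
# Toy sketch of the blackboard pattern: agents share one whiteboard
# and build on each other's findings. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Whiteboard:
    """Shared store that every agent can read and append to."""
    findings: list = field(default_factory=list)

    def post(self, agent: str, finding: str) -> None:
        self.findings.append((agent, finding))

    def read(self) -> list:
        return list(self.findings)

def literature_agent(board: Whiteboard) -> None:
    # In a real system this agent would dig through papers.
    board.post("literature", "prior work reports effect X under condition Y")

def data_agent(board: Whiteboard) -> None:
    # Reads the whiteboard first, so it can build on earlier findings.
    context = board.read()
    if any("effect X" in finding for _, finding in context):
        board.post("data", "dataset shows effect X at p < 0.05")

board = Whiteboard()
literature_agent(board)
data_agent(board)  # sees the literature agent's entry and extends it
for agent, finding in board.read():
    print(f"[{agent}] {finding}")
```

The point of the pattern is the second agent's behavior: it only runs its check because the first agent's finding is already on the board, which is the "building on each other's work" coordination the report describes.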
Starting point is 00:49:24 And so it's like a coordinated system. And what I found really fascinating about this is just how quickly they're able to ingest information while working collaboratively. And it kind of speaks to your point about a lot of these image generation models: it may ingest all this information and have these various different agents operating in sync, analyzing this information, but how much of that information is shared between these various agents? Because they're all looking at a different perspective.
Starting point is 00:49:53 One is maybe trying to figure out, okay, where is the light coming from? What are the shadows? Another one is trying to figure out, okay, what is in the room? You've got a bookcase. What are the angles? Another one is trying to figure out the complexion of your skin, and all this kind of stuff. And so being able to analyze all this information in sync but share that information, I think, is so, so fascinating. And what does the world look like moving forward, when we can crunch these unbelievable amounts of data? Let's take a quick break and hear from today's sponsors. No, it's not your imagination. Risk and regulation are ramping up, and customers now expect proof of security just to do business. That's why Vanta is a game changer. Vanta automates your
Starting point is 00:50:33 compliance process and brings compliance, risk, and customer trust together on one AI-powered platform. So whether you're prepping for a SOC 2 or running an enterprise GRC program, Vanta keeps you secure and keeps your deals moving. Instead of chasing spreadsheets and screenshots, Vanta gives you continuous automation across more than 35 security and privacy frameworks. Companies like Ramp and Riter spend 82% less time on audits with Vanta. That's not just faster compliance, it's more time for growth. If I were running a startup or scaling a team today, this is exactly the type of platform I'd want in place. Get started at Vanta.com slash billionaires. That's Vanta. Vanta.com slash billionaires.
Starting point is 00:51:19 Ever wanted to explore the world of online trading but haven't dared try? The futures market is more active now than ever before, and Plus500 Futures is the perfect place to start. Plus500 gives you access to a wide range of instruments: the S&P 500, Nasdaq, Bitcoin, gas, and much more. Explore equity indices, energy, metals, forex, crypto, and beyond. With a simple and intuitive platform, you can trade from anywhere, right from your phone. Deposit with a minimum of $100 and experience the fast, accessible futures trading you've been waiting for.
Starting point is 00:51:56 See a trading opportunity? You'll be able to trade it in just two clicks once your account is open. Not sure if you're ready? Not a problem. Plus500 gives you an unlimited, risk-free demo account with charts and analytic tools for you to practice on. With over 20 years of experience, Plus500 is your gateway to the markets. Visit Plus500.com to learn more. Trading in futures involves risk of loss and is not suitable for everyone. Not all applicants will qualify.
Starting point is 00:52:27 Plus500. It's trading with a plus. Billion-dollar investors don't typically park their cash in high-yield savings accounts. Instead, they often use one of the premier passive income strategies for institutional investors: private credit. Now, the same passive income strategy is available to investors of all sizes thanks to the Fundrise Income Fund, which has more than $600 million invested and a 7.97% distribution rate. With traditional savings yields falling, it's no wonder private credit has grown to be a trillion-dollar asset class in the last few years. Visit fundrise.com slash WSB to invest in the Fundrise Income Fund in just minutes.
Starting point is 00:53:11 Fund's total return in 2025 was 8%, and the average annual total return since inception is 7.8%. Past performance does not guarantee future results; current distribution rate as of 12/31/2025. Carefully consider the investment material before investing, including objectives, risks, charges, and expenses. This and other information can be found in the income fund's prospectus at fundrise.com slash income. This is a paid advertisement. All right, back to the show. What you're saying is exactly right. Like, going back to the picture example, let's say that first picture was presented and you have five AI agents
Starting point is 00:53:51 and their job is to find what's wrong with this picture. One of them, you know, finds that the picture on the iPhone is wrong. One of them sees that the bookshelf is not behind me. And then they have a collective conversation, and then the image is regenerated. I think you're seeing this with Grok Heavy, where Grok Heavy has four different AI agents, and I suspect that they go through and have a consolidation and a re-adjudication as to what the final answer should be before it gives it. So similar to what you're describing with that, Seb. But this is the thing that I think, well, a lot of people are talking about this: the energy consumption to then run all of these checks, these additional agents, right? If we put
Starting point is 00:54:34 20 more agents on finding the mistakes in what the first one generated so that we can do another iteration of it, it's just 20 times the amount of energy that's required to provide that answer. And this takes us down a whole path, which is, and I don't know if you wanted to move on to the next topic, but this is my next topic, which is nuclear power, energy being the limiting factor of where this can all go. You literally had Jensen Huang from Nvidia come out and say that he thinks, in the grand scheme of things, China has a
Starting point is 00:55:21 political statement to then allow the current U.S. administration to go out and start spending a bunch of money on energy and to reinvigorate nuclear and all that kind of stuff. But it is the one thing that I keep hearing in this particular space is where we need to be spending a lot of our time is just taking the grid to the next level. As a Bitcoiner, that watched all the, how terrible Bitcoin is because of the energy consumption, specifically from people in tech for, you know, what felt like a decade, now pivot and they're all on board for, you know, conducting nuclear power, small modular reactor, innovation tech. It's very smirkworthy to see how many people are jumping.
Starting point is 00:56:09 on this train. Any comments on that, Seb, or anything that you want to wrap up, because I just kind of moved on to the next thing without letting you. You know, there's one point that I'll quickly add, which is, I think it's so important to be able to, one, increase the efficiency of these models, so we're not just throwing tons and tons of energy at them that could potentially have another use, although you could argue that in the free market, energy only flows to where value is being created, so it's never going to be wasted. But I also think that as we're firing more energy into these models and we're getting more information out, we're still kneecapped, not by the models or the amount of energy; we're kneecapped by ourselves,
Starting point is 00:56:47 because there's the speed of discovery and then there's the speed of verification of the information coming out of these models. And I think that AI is accelerating the creation of ideas and research pathways and code and scientific claims at a pace that, as humans, we just cannot match. Verification, though, is slow, meticulous work: checking all the assumptions, validating the experiments, and actually reviewing all the code, and that still happens at human speed. And so when you speak to a lot of coders, they're saying, awesome, you're a bank and you've just spat out all of this code to create a whole new system. But we've now got to go through and read all of that code and make sure that code actually
Starting point is 00:57:24 does what it says it's meant to be doing. And so I think it brings up a couple of questions, and I'm curious on your thoughts: what happens when the rate of ideation just massively outpaces the rate of validation? Does our progress almost stall a little bit because of this backlog of all of these amazing ideas, where we don't quite know which avenues to go down because we just can't keep up with how much information is coming at us as humans? Well, to this point, I have a stat for you. A Google search prior to AI, well, even today if it's not using AI, uses 0.3 watt-hours of energy. But if you take Gemini or ChatGPT, any of these large language models, and you put in a query,
Starting point is 00:58:08 it's three to five watt-hours, roughly a 15x increase in energy per what we would refer to as a click. So, you know, historically, in 2010, if you went and did a Google search, you were consuming 15 times less energy than you are by going into ChatGPT and typing in your question and hitting enter. Now, the response you're getting back is, I would say, on the magnitude of 15 times better. But what it doesn't speak to is if somebody's asking, and we were taught in school, there's no bad questions, right?
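For listeners who want to check the arithmetic, the per-query figures just quoted can be lined up in a quick sketch (the watt-hour numbers are the rough estimates cited in the conversation, not measured values):

```python
# Rough per-query energy comparison, using the estimates quoted above.
classic_search_wh = 0.3       # classic Google search, watt-hours per query
llm_query_wh = (3.0, 5.0)     # LLM query, low and high estimates

low_ratio = llm_query_wh[0] / classic_search_wh    # about 10x
high_ratio = llm_query_wh[1] / classic_search_wh   # about 17x

print(f"An LLM query uses roughly {low_ratio:.0f}x to {high_ratio:.0f}x "
      f"the energy of a classic search.")
```

On these figures the range is roughly 10x to 17x, so the "15x" quoted in the conversation is a fair ballpark rather than an exact multiple.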
Starting point is 00:58:45 There's no bad questions. But what if people are asking really dumb questions, things that don't require so much comprehension to get a simple response? And I think that where we're at now is the default is that you're not going to Google. I don't go to Google for nearly anything. I always go to one of these AIs, whether it's Grok, now Gemini, or ChatGPT. That's the first place I go if I want to find something out. I don't go to Google anymore. I'm curious if you go to Google anymore. Almost never. And to be honest, even when I do go to Google, most of the time the answer to what I'm looking for
Starting point is 00:59:22 is given in the AI summary at the top anyway. So we're still getting the results we're looking for; AI is bringing that information to us these days as opposed to having to go and scan tons of pages. But that transparency, it may say it provides all of the links, and this I do think is really interesting. We may be getting transparency. It gives an amazing output with all of this hyperlinked text, which says, this is the answer to the question you're looking for. But I think that sometimes transparency isn't necessarily trust. And we can put so much trust into these models, even when it's giving us a complete false story or a bit of a facade. And so it kind of comes back to this question of, of course
Starting point is 01:00:04 these are improving, but how much trust are we putting in these models, just expecting and getting used to, oh, that output's pretty good, I'm just going to use that output? The kids in college or high school or whatever, they're using it to write their reports. And then I don't put it past the professors that they're then taking the reports and running them through AI to provide the feedback. So you have the AIs writing the reports and giving the feedback, and the humans are just kind of the paper pushers. You've probably seen the meme: it's got a woman at her desk sending an email to her boss. She goes and types into AI, gets this amazingly worded email that explains her opinions
Starting point is 01:00:47 and this and that. And then she sends it, and she's left feeling accomplished. And then you see the other side of it, where the boss receiving the email takes it, puts it into AI, asks what the key points are that she's trying to highlight, and condenses 3,000 words down to three sentences. And so everyone is kind of fluffing everything up, and then everyone's taking that fluff and compressing it again. And you're just like, what is happening? The AI slop. I keep hearing about AI slop. And it's real. It's real. The AI slop is real. I just want to, yeah, I want to highlight this real fast. So after the comment from Jensen on, you know, AI or China potentially beating the U.S. to AGI because of the energy
Starting point is 01:01:28 infrastructure, this article came out, I want to say on the same day. This article's from November 19th of this year, 2025, from Bloomberg: U.S. to own nuclear reactors stemming from Japan's $550 billion pledge. Check this out, Seb, as I'm scrolling down the key takeaways by Bloomberg AI. You don't even have to read all of this, which is probably AI slop beneath this; you can read the AI summary. And it says the U.S. government plans to buy and own as many as 10 new large nuclear reactors that could be paid for using Japan's $550 billion funding pledge. The funding pledge is part of a push to meet surging demand for electricity, including for energy-hungry data centers that power artificial
Starting point is 01:02:15 intelligence. The Trump administration has set a target to get 10 large conventional reactors under construction by 2030. So it seems like the U.S. understands the limitation, which is energy infrastructure. It seems like it's trying to do things from a policy standpoint to reinvigorate some of these. I saw Three Mile Island; they're going to bring that back online. And I think this is the thing I'm really wanting to talk about.
Starting point is 01:02:41 The years and years of ESG energy equals bad is over. It seems like this whole thing, the climate change, energy is bad. If you consume any sort of energy, it's bad. All of those talking points are just going by the wayside because the key players and the string pullers of the world have figured out that if they're going to win this next race, the race of intelligence, it requires more energy, not less energy. And it just seems to be dead on the vine. What are your thoughts, Seb?
Starting point is 01:03:15 I could not agree more. And I just think that we have this society that seems to have this idea that consuming energy, as you're saying, is bad, when in reality life consumes energy. And if you look at any chart out there, there is like a 99% correlation between GDP per capita and energy consumption. There are no low-energy-consuming, high-GDP countries. They just don't exist. And so I think that life naturally requires energy. However, there is a discussion to be had around the difference between consuming energy and environmental destruction. And there's obviously ways in which you can decimate the environment, whether it is a lot of these
Starting point is 01:03:53 lithium mines and whatnot trying to obtain heavy metals, and even just some of the various fossil fuel approaches. And I don't necessarily want to have an opinion on that. But I think it's really interesting seeing the nuclear narrative starting to shift, because I think it's unbelievably important. I read a book a few years ago called Atomic Awakening, and it dove into the world of nuclear energy. And one of the stats that stood out to me, I just went and found the information, talks about how we tend to think that nuclear is unbelievably dangerous, and that the reason why we don't use it is because it's just killed so many people throughout history. And that information could not be further from the truth. And I think that it is because
Starting point is 01:04:35 we see things like Chernobyl and Fukushima and we hear about radiation poisoning. In reality, one of the stats the book looks at is deaths per terawatt-hour of energy used. For coal, there are around 25 deaths, because of, obviously, the pollution in the air, the people that are actually working in factories and such, the coal mining. In the oil industry, it's around 18 deaths per terawatt-hour. The gas industry is three deaths per terawatt-hour. And hydropower is 1.3 deaths per terawatt-hour. Nuclear is 0.03 deaths per terawatt-hour.
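The mortality figures quoted here can be put side by side in a small sketch (numbers as quoted in the episode's discussion of Atomic Awakening; published estimates vary somewhat by study):

```python
# Deaths per terawatt-hour of energy produced, as quoted in the episode.
deaths_per_twh = {
    "coal": 25.0,
    "oil": 18.0,
    "gas": 3.0,
    "hydro": 1.3,
    "nuclear": 0.03,
}

# Express each source as a multiple of nuclear's rate, safest first.
nuclear_rate = deaths_per_twh["nuclear"]
for source, rate in sorted(deaths_per_twh.items(), key=lambda kv: kv[1]):
    print(f"{source:>8}: {rate:6.2f} deaths/TWh "
          f"({rate / nuclear_rate:,.0f}x nuclear)")
```

On these numbers, coal comes out at over 800 times the mortality rate of nuclear per unit of energy, which is exactly the point being made.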
Starting point is 01:05:06 Like, we're talking about a minuscule amount in comparison to every other type of energy source. And so I think it's awesome to be able to see the narrative shifting. I think the biggest thing now is just seeing the policy and the legal side of things shift, because I think it's been kneecapped by all of the legislation that has been rammed through the legislative system. Nothing more to add. Can't agree more. Did you have a final topic that you want to discuss, Seb?
Starting point is 01:05:36 I would say, actually, you know what, it kind of ties in. I have a couple more topics, but we can always leave those to another time. But there's a topic I'm curious to hear your thoughts on, and it goes back to AI again. It's this idea of wisdom and diversity of thought. In my mind, wisdom has never really come from everyone thinking the same way. It emerges from contrast: hearing radically different positions, holding them together, and discovering new insights in the space between those various insights. And so throughout history, we've seen all of these breakthroughs in various sciences and whatnot always coming from the fringe. It is not consensus. It is not
Starting point is 01:06:17 from the center; it's from various individuals who have thought outside the box and noticed something that others have overlooked. And I think what is interesting is that AI is different, in that we're feeding all of these models the same information. And on top of that, AI, from the way that I understand it, is built on weights. And if an idea carries a low weight, even if the idea is brilliant, AI doesn't necessarily reproduce it or talk about it in the text. And so if children are growing up learning from these centralized models, well, I think they're also inheriting the same baseline worldview. Instead of tens of thousands of unique teachers, all with unique life
Starting point is 01:07:00 experiences, all with a different intellectual starting point, sharing this information with their students, which is what I think creates wisdom and curiosity, there's this uniformity where all these kids are learning from the exact same models. So I'm curious: if we fast forward 10, 20, 30 years, and these kids are being taught by AI but they're all being fed the same information, what happens to innovation? What happens to wisdom and knowledge? I'm curious to hear your thoughts on this. My conversation with my wife on anything AI almost always comes back to this discussion
Starting point is 01:07:35 point that you're bringing up. You know, it really comes down to: are we training the AI, or is the AI starting to train us? And then the question is, what would it be trying to train you on, if it was trying to train you? I think the answer is that it wants to gain more novel insights into what it doesn't know. It's going to try to lead you into those domains, which is scary, that it would be leading you that way. But in more general terms, I just think that the challenge you're really facing is the one that we brought up before, where everybody's using AI to write their papers or to do their research, and then they're handing that in. And it's just a bunch of
Starting point is 01:08:14 AI slop that's kind of replacing deep thought. And I think the other concern, Seb, is that as everybody becomes armed with AI, it becomes harder and harder to compete or to stand out or to provide novel insights, because the competition is so fierce. I don't know what this does from just a human motivation standpoint. I think you're going to have a lot of people that are just like, it's not even worth my time or effort to try, because somebody armed with AI is just going to kick my butt, or I just can't stand out. And if I can stand out, it's only going to last for three days before somebody else in the market comes with more competition and erodes away whatever competitive advantage
Starting point is 01:09:00 I had. Where I would push back is there are some industries out there where you can still provide value if you're servicing human beings. And where they aren't, or at least where they appear to not be, is in services: soft services, digital services. That seems to be crazy competitive. But providing service from a physical standpoint, for example, if you want your yard mowed, if you want work done around the house, if you want your plumbing fixed, a lot of these skills that I think people
Starting point is 01:09:41 in the United States have really veered away from, looking at them and saying, oh, that's not going to pay me a lot, so I'm not going to go work in those industries. I think that is ripe for disruption and an opportunity for a lot of people to actually make quite a bit of money, especially if they can do it with high-quality work. But it involves physical labor; it involves people getting out in the physical space and doing things, not sitting behind a computer clacking on keys. I would love to hear from the audience. Like, you know, if you guys are listening to this and you've got comments on this particular
Starting point is 01:10:18 topic, I would love to hear what you've got. But sorry, Seb, to interrupt you. I want to hear what you have to say, too. No, and you make a really interesting point. And I'm curious, again, just to hear your reflection on this. I've spoken to many individuals through the Bitcoin space that have come from traditional finance. They used to work in consulting, in the banking sector, for CPAs and various other financial industries. And what I find really interesting is that they're actually stepping back from that sector because the white-collar worker, the knowledge worker, is being completely disrupted through AI. They're stepping back and looking: okay, where can I direct my time and energy
Starting point is 01:10:57 into something that's not going to be replaced immediately or in the foreseeable future? And one of my good friends in the Bitcoin space, who I speak to biweekly, is saying, you know what, I'm looking to buy a painting company with a bunch of painters. I'm looking to buy a storage company. I'm looking to buy things that we are not going to see overtaken any time soon. And so if you have a handful of painters or a handful of plumbers, or you've got a trade company, I think those companies can provide a reasonable lifestyle. You don't need to be worth 50 million, 100 million. It's: do you want to be able to show up for your family? Do you want to be able to afford a house and to live comfortably?
Starting point is 01:11:35 And I think sometimes the financial world, social media, says we need more. In reality, I think you can live a relatively comfortable life with a decent income of low to mid six figures through one of these more manual, physical trades. Yeah. I mean, the counterargument that somebody from tech is going to immediately bring up is humanoid robots, which we didn't even discuss during the show. But at this moment in time in 2025, in any humanoid robot video that I've watched, it goes over and it's, like, emptying a dishwasher, and it literally takes five minutes to put
Starting point is 01:12:09 a spatula in the dishwasher, and then it fumbles all over the place. So that could change very quickly. But I think, if I'm going to hire somebody to do something around the house or whatever it might be, I'm going to a human and not a humanoid robot, at least not anytime soon. And, you know, I think that naturally we've got this world where there's a lack of connection. People want to interact with people. And so I'm noticing, and I think it's an awesome swing, I'm noticing that at the majority of companies today, you cannot speak to someone on the phone.
Starting point is 01:12:43 You're getting an AI bot through the chat. But the companies that do say, hey, you know what, here's our number, you give us a call and you're actually going to get a person, they're starting to see a lot of success. And so it's really cool just to recognize that with technology, the pendulum always swings. And I think we've swung to this point where we've almost replaced ourselves in many ways, or tried to. But we're recognizing that, first, AI and a lot of these technologies are not people, and people know that they're not people. And secondly, we're missing that connection.
Starting point is 01:13:10 And so I'm curious to see, over the next few years, does that pendulum continue to swing back a little bit more towards center, where people recognize the importance of physical connection, spending time with friends, actually having a number to call to talk to someone to deal with any issues. Okay, I've got one final surprise before we wrap this up. While we were recording today, I took a screen grab of Seb and I having our conversation, and I had it run through the banana-rama, whatever the heck it's called, the Pro Gemini model. Nano Banana. Thank you, sir. And I asked it: what would these two podcasters look like if there was a camera behind them and it took a picture while they were having the conversation? Now, you're going to see that the screen grab I got is probably one of
Starting point is 01:14:04 the most flattering pictures of Seb that you will ever see. This is such a bad picture. Check this out. Okay. So here you are, mid-blink, looking up, and I'm just stone-cold staring at the camera. And it's just the video feed of him and I having the call. Are you ready to see what it interpreted the back of our heads, taking a picture from behind us, to look like?
Starting point is 01:14:34 Okay. For the person that can't see this, it's not bad. Like, there's a lot right with this picture. It looks like, Seb, your room, it did not reverse your room, right? Like, your room is there, and it is showing that you are looking at a computer, from the back of your head, and all of that looks pretty normal. But you're talking to yourself and not me. Oh, this is interesting. Look at your background. Your background is my background. And have you seen that it's also given me your headphones, but not in the... Oh, that's right. Yeah, look at that. That's wild. And then my picture is really jacked, because the microphone is literally behind me. And then I'm talking to you, which is correct, and it's the image of you looking forward. Okay, so that all looks correct.
Starting point is 01:15:40 It's pretty close. Okay, so not bad, but there's a couple of hiccups. Now, if I went in there and pointed these things out, I think it would actually get it all correct if I went back and forth. I mean, obviously, I didn't have time to do anything other than quickly type the prompt in there, and that was the first go-around coming back to me. So pretty wild, but not quite right. But it's coming along very fast. Similar to your watch thing, it had my monitor.
Starting point is 01:16:00 It has my exact. Get the heck out of it. here. No, 100% is my exact monitor. And that's why I'm just like, what? I don't know what my monitor look like. Get the heck out of here. That's the monitor you have. That's got my monitor. Yeah. Dude, that's weird. That is definitely not my monitor. In fact, I have three screens here in front of me. In fact, I get comments online. Why is he looking off to the side? Well, I'm looking over at my second or third monitor to pull up all the things on the fly during the show. So yeah, no, my monitor's way off. It looks like my monitor's on the floor, too.
Starting point is 01:16:32 So you're really, you're stacking sats. You don't have a chair. Oh, yeah, that's right. So it did get that correct. Pierre Rochard, AI, knows I don't have a chair. I'm sitting in a chair. It's a little ways off. It will get there.
Starting point is 01:16:49 Wow. Seb, I love this. This was so much fun. If you guys enjoyed this format, I enjoy this format, but maybe the audience doesn't like this format. If you like this format, please tell us in the comments, you know, if you're on X. Let us know, because if you like it, we want to keep doing these types of things. And Seb, thank you so much for your comments and what you brought to the show today.
Starting point is 01:17:11 Give people a handoff to anything you want to highlight, Seb, and thank you so much for joining us today on the show. Seb, give people a handoff to where they can learn more about you. Absolutely. And I would start by saying, as well, if you enjoyed this kind of discussion when you listen to it, feel free to just post a comment with anything that you think is happening in the world that is interesting, and next time we record in this style, we'd love to bring it up. Because I think that sometimes there's so much stuff happening that a lot of it slips between the cracks, and the world is a fascinating place, and there are incredible things that people are working on.
Starting point is 01:17:38 slips between the cracks and it's just the world is a fascinating place and there's incredible things that people are working on. Now, you can just find me at Sed Bunny and Bunny is BUNN and EY. I'm said Bunny on Twitter. I still kind of go by Twitter. I just feel like X to me, it doesn't resonate. No, you can find me at saidbunny.com on Twitter and my book is a hidden cost of money. And yeah, I just really appreciate you guys listening and thanks for having me on Preston.
Starting point is 01:18:00 All right, everybody. Thanks for joining us. And until next time. Thank you for listening to TIP. Make sure to follow Infinite Tech on your favorite podcast app and never miss out on our episodes. To access our show notes and courses, go to theinvestorspodcast.com. This show is for entertainment purposes only. Before making any decisions, consult a professional.
Starting point is 01:18:23 This show is copyrighted by the Investors Podcast Network. Written permissions must be granted before syndication or rebroadcasting. You know, Thank you.
