We Study Billionaires - The Investor’s Podcast Network - TECH009: Data Centers in Space, AI Education, Haptic Touch Robotics and More w/ Seb Bunney
Episode Date: December 17, 2025. This episode explores the intersection of AI with healthcare, space innovation, and education. Preston and Seb discuss personalized genetic analysis, Google's space data centers, haptic touch tech, and the future of simulated realities. They also touch on AI bias, regulation, and how evolving tech might reshape society, purpose, and connection. IN THIS EPISODE YOU’LL LEARN: 00:00:00 - Intro 00:05:20 - How genetic data is used to create custom supplement plans 00:07:21 - The role of AI in interpreting genetic information 00:16:21 - Why Google's space-based data centers could revolutionize computing 00:19:55 - Technical challenges of Bitcoin mining in orbit 00:24:17 - The implications of space debris and Kessler Syndrome 00:26:11 - How AI is personalizing education through initiatives like "Learn Your Way" 00:30:35 - Pros and cons of VR and humanoid robots in classrooms 00:38:45 - Ethical concerns around AI bias and centralization of regulation 00:50:12 - Philosophical questions about simulations, reality, and technology’s impact on society 00:54:37 - Advances in haptic touch technology for VR and robotics Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences. BOOKS AND RESOURCES Seb’s book: The Hidden Cost of Money. X Account: Seb Bunney. Related books mentioned in the podcast. Ad-free episodes on our Premium Feed. NEW TO THE SHOW? Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members. Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok. Check out our Bitcoin Fundamentals Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. 
Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter. Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: Simple Mining Human Rights Foundation Unchained HardBlock Linkedin Talent Solutions Onramp Amazon Ads Alexa+ Shopify Vanta Public.com - see the full disclaimer here. Abundant Mines Horizon Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm
Transcript
You're listening to TIP.
Hey, everyone, welcome to this Wednesday's release of Infinite Tech.
Today, Seb Bunney and I are back to cover all the most interesting and crazy things happening
on the tech frontier.
On this show, we dig into how AI is transforming personalized healthcare from genetic analysis
to real-time supplement protocols and what that means for privacy and trust.
We also break down Google and SpaceX's push towards space-based data centers and the 10x drop
in launch costs that needs to take place before that future is real. From there, we shift to education,
personalized AI learning, VR classrooms, and why the traditional model is struggling to keep up.
And we wrap with the big question, AI bias, regulations, and how these systems decide what is
signal versus noise. This is surely an episode you won't want to miss. So without further ado,
let's jump into the show. You're listening to Infinite Tech by the Investors Podcast Network,
hosted by Preston Pysh.
We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money.
Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today.
And now, here's your host, Preston Pysh.
Hey, everyone, welcome to the show. I am back here with the one and only Seb Bunney.
And we've got a whole array of tech topics and exciting things that are in the news and happening,
and fascinating, fascinating stuff.
While we were preparing for these before we hit record,
Seb and I were both just like,
this is really fun to go through
all these exciting things that are happening in the world right now.
So, Seb, excited to have you back.
Are you ready to dive into this?
Oh, man, definitely ready.
And to that point, I think it's really fascinating
when you look at something through a different lens,
the lens being, hey, I'm about to talk about this thing.
I need to understand it a little more than just this very, very superficial level.
And so, yeah, the world is moving so quick
when you're trying to keep up with technology.
I want to emphasize, if you're listening to the show and you come across something
on X or anywhere on the internet that is just fascinating from a tech standpoint, we've had
a couple people share stuff with us on Twitter.
One of the things I believe we're going to be using today on the show, so share that
with us, point it out, shine a flashlight on it for us so we can bring it on the show.
And we'll mention you on the show if you're one of the people that bring it to us.
So let's start off, Seb.
You said that you had something with Gary Brecka that you want to
cover to start off.
Absolutely.
So, this isn't necessarily like the newest technology that we're seeing advancing in
this week, last week.
This is something that maybe over the last few years I've been digging into.
And it's this idea of kind of like personalized health.
So for people that don't know who Gary Brecka is, he kind of founded something that he
calls the Ultimate Human.
I think it's a company.
They also have a podcast.
And if I give a little bit of backstory, he used to be a life insurance guy.
He used to look at these individuals that are applying for life insurance.
The life insurance company had to figure out, okay, based on this person's health, how much
are we going to charge for these life insurance premiums? And so he would be looking at basically
someone from birth through to present day to try and determine from like a data perspective,
how healthy is this person, how much life do they have left? And what he noticed increasingly is
many times these individuals with health issues would be going to separate doctors,
but no single person was looking at them holistically, and there would be conflicting treatments.
So, for instance, one lady in particular, I think he talks about in one of his podcasts:
he was looking at this case and this lady was seeing two doctors.
Both of them were giving her a different prescription.
And those prescriptions actually interact with one another and could be deadly.
And he, given that he was working for the life insurance company, wasn't legally allowed to step in and voice this to her.
And so he just had to see this play out.
And he was just like, I cannot continue doing this.
I want to be able to help people.
I want to be able to support people along in their journey,
and I recognize that health is something that we need to look at holistically.
We can't look at it interventionistically, which is what a lot of our healthcare system does.
So, in essence, the reason why I find it really fascinating is that he looks at
our genetics and what are called our methylation pathways.
And so for people that aren't familiar with kind of what these are, methylation, the way
I interpret it, and I could be completely wrong here, the way I interpret it is that our body has
these things called methylation pathways, and it's how we take nutrients and then
use those nutrients to have our bodily systems run: extraction and detoxification,
inflammation control, hormone processing, DNA repair, all of these various
processes in the body. And the way I kind of think about it is like, imagine if you're trying
to feed a car crude oil, it's not going to be able to take that crude oil and use the nutrients
directly. You need to have it kind of converted into gasoline. Well, it's the same thing when we
eat food and vegetables: that food comes into the body, but we can't use it directly.
We need to go through the methylation pathway to convert it into a form that's ready for us
to use. So anyway, long story short, what this guy Gary Brecka basically does is he does like a genetic
test. He looks at usually five specific genes: the MTHFR gene, the MTR gene,
the MTRR gene, the COMT gene, and the CBS gene. Now, you don't necessarily
need to know what any of these things do, and I'm not going to dive into them,
but I definitely recommend people going and digging into them.
But basically these are our methylation pathways, which help us with detox, energy metabolism,
and inflammation control.
And so when he actually looks at these, he's able to determine what supplements we need
on a unique personal level, as opposed to us just blindly throwing darts at a board,
going and buying, oh, I need vitamin D, oh, I need this, oh, I need that.
And so I think what's so cool about what is happening in the world today is we're starting
to get personalized healthcare.
We're starting to be able to have a personalized supplement protocol as opposed to us trying
to listen to our intuition and feel, oh, I'm taking this thing and I think I'm feeling a little
better.
So I think that for me, I've been listening to Gary Brecka for a few years,
and it has profoundly changed my health.
And I feel really lucky. Once I started kind of incorporating some of this stuff,
I actually haven't had a cold in four to five years.
I got one this weekend.
And it's the first one that I've had in four to five years.
Because you were going to be talking about it.
Yeah.
100%.
It's got to be because I was going to be talking about it.
But I just find it really, really fascinating.
And I know one of the reasons why I bring this up is because I know, Preston, you and I talk about this a lot when we're in person, just how we show up, how we think about supplementation, how do we support our bodies in the best way possible.
And so I just wanted to kind of bring up technology from a health standpoint is advancing so rapidly that we can start to have more personalized care.
Because I think up until now, we look at the body as
this kind of like singular thing, that it's just like, oh, people need this, this, and this.
And it's just like, well, that's not necessarily true.
Some people need more of this and less of this.
Some people may have a deficiency in this.
And so I think having a personalized healthcare approach is going to change the way I think
we view health.
Yeah.
So he's a major voice in the health, longevity space.
He's the guy that's wearing like the weighted vest around all the time, if I'm correct.
Is that right, Seb?
Yeah.
Yeah.
So the first thing that comes to mind, as you mentioned all this, is just,
tracking all of your DNA data and ingesting that into some database and then running AI on it
in order to get insights that we've never really been able to understand before AI and
its ability to pattern recognize so much complexity. So that's the exciting part, obviously,
from a technology standpoint. I know we have a lot of privacy folks that listen to the show
through the Bitcoin community. And one of the things that immediately comes up when you start down
this path of taking your raw code, your genetic code, your unique genetic code, and putting it
into these models and running it on somebody else's servers, people have concerns as to how
that could maybe be used also in a very nefarious way and could be captured. I know 23andMe
was one that was doing DNA testing and keeping track of all of those records,
and then they were procured by different parties, and like, what happens in the business of collecting
biometric data?
And I think it's an important counter talking point to some of this stuff because like you,
I am super excited and super fascinated by like what this could mean, what it could mean for adding
years to your life because now you're finally getting custom treatment.
I mean, I think when you look at what you're bringing up, Seb, which is this custom DNA,
like audit and then treatment based on that is where all of medicine is going in very short order
in the coming five to 10 years.
And I think it has the potential to lead to some like serious longevity results.
I just, I don't know how you possibly go about it in a way that protects the privacy of the
people or that, you know, encrypts the data and you know that the person's data is protected
or whatever, right?
Like it gets to be somewhat concerning and a lot of people don't want to talk about that
side of it.
But I think it's an important additional note.
I'm curious if you would agree or if you just...
Oh, I couldn't agree more.
And, you know, to share some of my naivety on privacy previously: when 23andMe was hacked,
I was lucky enough to have downloaded my data before the hack.
I used 23andMe years ago,
like 2018, maybe 2019, purely from the perspective of,
oh, man, I wonder if there are any, like, relatives of mine in the area
I could reach out to. And I found it interesting, but ultimately it didn't really give me
much information. It was a little more broad. And so I would say that what I found really
fascinating, though, is before I went down the privacy route, I took all of the downloaded
data, put it into ChatGPT. I probably shouldn't have, and now I regret doing this. But I put
it into ChatGPT, and then I started interrogating my own genetic information. And that was really,
really fascinating. What did you learn? What did you learn? I was able to ask it, like,
Like, I can't remember the exact size off the top of my head.
It was a Word document, and it was something like 14,000 pages, if I remember correctly.
It basically just completely jammed my computer.
My computer couldn't process this much information.
But once I put it into ChatGPT and started interrogating the data,
I was able to say, hey, act as Gary Brecka.
I basically made a Gary Brecka bot and I said,
I want you to be able to look through my genetics from the perspective of Gary Brecka.
I want you to go and try and find the MTHFR gene, the MTR gene, the MTRR gene, go and find and see if there's a mutation.
Because one of the things he talks about is, take the MTHFR gene.
If you have a mutation on this MTHFR gene, then you can take, if I remember correctly, it's one of the B complex like B12 vitamins,
and all of a sudden it can massively improve your methylation pathways so you're able to process nutrients.
So I went through, looked, found out I did have this mutation, which I think a large portion of the population.
do. And just by taking B12, I noticed a huge difference in my health. Huge difference.
Dude, you are so much further down this path than me.
I'm really interested in this stuff, but you are way down the path. That is fascinating.
And so when you ran it through the AI, it found that for you? Absolutely. Wow.
And I should preface this by saying there's probably doctors listening to
this and just being like, you're probably interpreting this information wrong. And I think
I'm looking at it from a naive perspective.
They don't know either.
They don't know either.
I found it really interesting. When I went into ChatGPT,
first off, I removed all my personal information from it.
So it didn't know that it was necessarily me.
I was saying, hey, I'm looking at this data for this person.
Are you able to let me know, is this gene mutated?
What are you able to grok from this gene and such?
And so I started interrogating the information.
There's also a lot of other genes that give us insight into whether we have a
predisposition to certain cancers, certain other issues, and we can go and interrogate our own genetics.
So I think that's really, really fascinating. What would be nice to see in the future is more
privacy-focused AI models where people can go and interrogate their own information.
I'm laughing and I can't get the smile off my face because I'm thinking, this doctor's probably
like pulling out a notepad and asking, you know, to take notes on what you're doing in order
to like go and do something similar. That's fascinating, dude. Kudos to you. I'm concerned about,
you know, ingesting the data into ChatGPT, but we'll put that aside. We'll put that over here.
By the way, great business idea there at the end.
Totally.
Right.
You know what?
Like, I've seen a few people go into their doctor's office.
They go and ask them, hey, I've been having these symptoms or, hey, I've been having this
issue.
And up until, what, late 2022 when ChatGPT came out, the doctor was
giving you their interpretation.
Yeah.
Now, I've heard of countless individuals walking to their doctor's office.
They ask them a question.
the doctor's like, oh yeah, give me one second.
Types into ChatGPT, ChatGPT gives an answer, and then the doctor feeds it back to the person.
And so I think a lot of the doctors now are starting to use ChatGPT because ChatGPT
is able to analyze just such a wider array of data, and a doctor has a very specific,
narrow field of view.
Yeah.
Yeah, the AI is training us at this point.
Man, I was, as a funny side note, I was watching the Jerry Seinfeld standup comedy
routine on Netflix. And he went on this bit where he was like, you think that you're taking the
cell phone around? No, he said, the cell phone's taking you around. That cell phone is
dictating where it's sending you. It's just like this really funny bit. But anyway, I think that
when you start going down this path of like the AIs, all these human interactions are just
relying on the hope that what it's feeding you is right, because you need a fast answer. That's really
the crux of the issue here: you don't have all day to go find 100 different resources to prove
it wrong. You just kind of need a quick answer and it doesn't have to be perfect. It just has to be
kind of good enough. And like as the whole world continues to like push that easy button,
like we're creating data, but it's data that isn't necessarily human generated. I mean,
you see these numbers coming out of Google and others and the amount of code that's being
generated on these platforms, and what, 70, 80, 90 percent of the code now is coming
from AI and not even humans. So it is getting weird, dude.
And you know, like, as you're saying that, it makes me think about when I wrote The
Hidden Cost of Money, my book. I feel really lucky that I wrote it pre-AI.
And I wrote it from the perspective of I looked at my note taking app and I had hundreds of
books that I've taken notes on. So when I started to write it, I'd already ingested all
of this information and then I started writing the book. But now it wouldn't surprise me if
we were to have a look and I haven't done this, but if we were to have a look at the amount
of books written and released every single year, pre-AI versus post-AI, we see this hockey stick
where people are releasing these books. But it doesn't necessarily mean that the information
in these books has validity or has been deeply ingested and thought about. Because I can go to
AI and say, hey, you know what? I want to write a new book on this subject. Can you please
give me the 12 chapters and the key points? Now can you please write these out? And can you provide
sources, but I never went and read all of these sources. And most people don't go and read the
sources from these outputs from AI. And so I think one of the challenges is like we're putting
increasing amounts of trust into this thing. And we're not actually looking at the validity
of the information being produced. AI slop. Let's take a quick break and hear from today's sponsors.
All right. I want you guys to imagine spending three days in Oslo at the height of the summer.
You've got long days of daylight, incredible food, floating saunas on the Oslofjord,
and every conversation you have is with people who are actually shaping the future.
That's what the Oslo Freedom Forum is. From June 1st through the 3rd, 2026, the Oslo Freedom
Forum is entering its 18th year, bringing together activists, technologists, journalists,
investors, and builders from all over the world, many of them operating on the front lines
of history. This is where you hear firsthand stories from people using Bitcoin to survive
currency collapse, using AI to expose human rights abuses.
and building technology under censorship and authoritarian pressures.
These aren't abstract ideas.
These are tools real people are using right now.
You'll be in the room with about 2,000 extraordinary individuals, dissidents, founders, philanthropists, policy makers,
the kind of people you don't just listen to but end up having dinner with.
Over three days, you'll experience powerful mainstage talks, hands-on workshops on freedom tech,
and financial sovereignty, immersive art installations, and conversations,
that continue long after the sessions end.
And it's all happening in Oslo in June.
If this sounds like your kind of room,
well, you're in luck because you can attend in person.
Standard and patron passes are available at OsloFreedomForum.com
with patron passes offering deep access,
private events and small group time with the speakers.
The Oslo Freedom Forum isn't just a conference.
It's a place where ideas meet reality
and where the future is being built by people living it.
If you run a business, you've probably had the same thought lately.
How do we make AI useful in the real world?
Because the upside is huge, but guessing your way into it is a risky move.
With NetSuite by Oracle, you can put AI to work today.
NetSuite is the number one AI cloud ERP, trusted by over 43,000 businesses.
It pulls your financials, inventory, commerce, HR, and CRM into one unified system.
And that connected data is what makes your AI.
smarter. It can automate routine work, surface actionable insights, and help you cut costs while
making fast AI-powered decisions with confidence. And now with the NetSuite AI connector,
you can use the AI of your choice to connect directly to your real business data. This isn't
some add-on, it's AI built into the system that runs your business. And whether your company
does millions or even hundreds of millions, NetSuite helps you stay ahead. If your revenues are
at least in the seven figures, get their free business guide, demystifying AI, at netsuite.com
slash study. The guide is free to you at netsuite.com slash study. That's netsuite.com slash study.
When I started my own side business, it suddenly felt like I had to become 10 different people
overnight wearing many different hats. Starting something from scratch can feel exciting,
but also incredibly overwhelming and lonely. That's why having the right tools matter.
For millions of businesses, that tool is Shopify. Shopify is the commerce platform behind
millions of businesses around the world and 10% of all e-commerce in the U.S. from brands just
getting started to household names. It gives you everything you need in one place, from inventory
to payments to analytics. So you're not juggling a bunch of different platforms. You can build
a beautiful online store with hundreds of ready-to-use templates, and Shopify is packed with
helpful AI tools that write product descriptions and even enhance your product photography.
Plus, if you ever get stuck, they've got award-winning 24-7 customer support.
Start your business today with the industry's best business partner, Shopify.
Sign up for your $1 per month trial today at shopify.com slash WSB.
Go to shopify.com slash WSB.
That's shopify.com slash WSB.
All right. Back to the show.
Let's go to the next topic. Okay. So, you didn't have anything else? You good on the first one?
Okay. Let's go to this next topic. So I'm going to play a clip. And this is, this is interesting
stuff. This is, this is out there. That's probably why I'm playing it. It's fun. Okay. Here we go.
I mean, over time, at Google, we're always proud of taking moonshots. You mentioned Waymo
earlier, you know, that's been over a decade in the making.
We're working on quantum computing.
In that spirit, one of our moonshots is to how do we one day have data centers in space
so that we can better harness the energy from the sun?
You know, that is 100 trillion times more energy than what we produce in all of Earth today.
So we want to put these data centers in space closer to the sun.
And I think we are taking our first step in '27.
We'll send tiny racks of machines and have them in satellites.
test them out and then start scaling from there.
But there's no doubt to me that a decade or so away will be viewing it as a more normal
way to build data centers.
Okay.
So Elon Musk retweeted this.
This is the CEO of Google that you heard talking.
And Elon Musk retweeted that video with just the text, interesting, as the SpaceX,
you know, founder and operator.
Okay.
So what in the world is all this about?
So I have a confession.
So I was out in Lugano, Switzerland, for this Plan B conference.
And we did this like shark tank thing where we were like hearing different pitches.
And I was fortunate enough to sit on one of the panels.
And a gentleman came up and he presented Bitcoin mining in space.
And I was just kind of like right off the bat immediately.
I was like, this is just such a bad idea.
Like I just couldn't understand why anybody was seriously pitching this.
Because it was the first time I'd heard of this kind of idea.
And when I was going through the slides, one of the things that really stuck out to me was
that, to make this even viable, the cost for space transport to get the hardware
just into space had to drop 10x.
So if it's $1,000 to put whatever up there, you've got to drop it down to $100 before this
would even be viable to even begin doing this for real.
And they were pitching us for investment in this company that was trying to do it with
Bitcoin miners, not the GPUs or TPUs, the Google ones, being sent up
into space.
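To make the scale of that hurdle concrete, here is a toy sketch. The $1,000 to $100 figures are the illustrative numbers from the pitch above; the payload mass is a hypothetical placeholder, since no real mass was quoted:

```python
# Toy sketch of the launch-cost hurdle described above.
# The $1,000 -> $100 per-unit figures come from the conversation's
# illustrative example; the payload mass is a hypothetical placeholder.
def launch_bill(payload_kg: float, cost_per_kg: float) -> float:
    """Total cost to lift a payload at a given $/kg launch price."""
    return payload_kg * cost_per_kg

payload_kg = 5_000  # hypothetical mass of a rack of miners or TPUs

today = launch_bill(payload_kg, 1_000)    # at $1,000/kg
after_10x = launch_bill(payload_kg, 100)  # after the claimed 10x drop

print(today, after_10x, today / after_10x)
```

Whatever the payload, the bill scales linearly with the per-kilogram price, so the 10x drop in launch cost translates directly into a 10x smaller launch bill.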
And so in preparation for this, after I watched that clip, I went and I remembered that 10x number
from the Lugano pitch.
And so I put it into AI and I was like, hey, what would the cost have to drop to for Google
to really kind of execute on this?
And this is called Project Suncatcher, is what this is called at Google.
And sure enough, the numbers came out that it needs a 10x drop, even for the TPUs that they're trying to put into space.
So he's calling it a moonshot.
I would agree, this is definitely a moonshot.
This doesn't seem like this is like right around the corner.
So they're doing this test run.
Let me just read through my notes here for people so they get it.
Google unveiled Project Suncatcher in early November 2025 with plans to launch two prototype satellites by early 2027.
So we're basically a year, year and a half out from them testing AI hardware
in orbit, partnering with Planet Labs for the initial mission. They're saying, here's another
stat: it's eight times more efficient for them to harness the sun out in orbit
than to be doing it down here on the surface of the Earth. Initial thoughts, Seb, what do you
think of some of this? It's interesting because you sent me a text just with a little snippet of
what you were going to talk about. And so I was thinking about a little more. And the first thing that
came to mind was also my understanding of Bitcoin mining and why Bitcoin mining in space, there may be
issues with it around latency. So depending on where you place this data center, this Bitcoin
miner, from my understanding, like when you're using, say, fiber optics and stuff like that,
we are capped at obviously the speed of light in terms of moving data. And so low Earth orbit,
it takes like two to 10 milliseconds to get information like back to Earth. Geostationary
orbit, I'm not even sure what this is, it's like 240 milliseconds. So geostationary, yeah, geostationary means that
the satellite will stay over the same spot of Earth. So as Earth is rotating, that it will stay
right over that same spot the whole time. So you have to go out at a certain radius in order to
get that. And geosynchronous orbit has like, there's a very small band in order to make sure that
it stays synced with the Earth. So it's a very popular distance from the Earth and very cluttered
distance from the Earth.
Wow.
Yeah.
So that says 240 milliseconds.
And then you've got the moon, which is two and a half seconds.
And if you're out in Mars, it's like five to 20 minutes.
And so the issue is, like from a Bitcoin mining perspective, if you mine a block in space,
by the time you actually propagate that block or push that block to the blockchain, someone
on Earth may have already found a block and everyone builds on the, obviously, the newest
block.
And so you've kind of, you've got a disadvantage already just by being in space.
And so I was thinking about it like, well, how does this relate to being a data center in space?
And I think that it probably works for certain types of information, but it doesn't work for other types of information.
So anything that is dealing with real-time interactions, millisecond responses like high-frequency trading, multiplayer gaming, blockchain mining, I don't think we'd be using that for the space.
But I think that anything that is dealing with kind of some of these bigger ideas of like AI model training, large scale simulations and like batch processing huge amounts of information, I think it could be profound.
That's kind of what came up as you sent that over to me in a text.
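Those latency figures can be sanity-checked with a quick speed-of-light calculation. A minimal sketch, assuming straight-line vacuum propagation at c and ignoring routing and processing overhead, so these are floors rather than realistic link latencies:

```python
# Minimum round-trip light delay to an orbit and back, ignoring
# routing, queuing, and atmospheric effects (real links are slower).
C_KM_S = 299_792  # speed of light in km/s

def round_trip_ms(distance_km: float) -> float:
    """Round-trip delay in milliseconds over a straight-line path."""
    return 2 * distance_km / C_KM_S * 1000

for name, km in [
    ("Low Earth orbit (~550 km)", 550),
    ("Geostationary orbit (~35,786 km)", 35_786),
    ("Moon (~384,400 km)", 384_400),
]:
    print(f"{name}: {round_trip_ms(km):,.1f} ms")
```

The geostationary result comes out just under 240 ms and the lunar one around 2.5 seconds, matching the numbers quoted in the conversation.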
So I interviewed an astronaut, oh my goodness, a bunch of years ago, Tim Copra.
And one of the interesting things that Tim told me, I don't know if he told me this on the show or told me this privately.
him and I attended a Berkshire Hathaway shareholders meeting many years ago, and he's told me a bunch of
stories throughout the years. One of the things I remembered that stuck in my head is when they
were working on the International Space Station, they would go out. He did a spacewalk, and he said that
when you went out and did a spacewalk and came back in, and you're in the chamber taking off all the
gear, and you have a hammer, you have all your tools, like all those things. You had to be very
careful that you didn't bump into, call it the hammer, when you came in and the temperature,
I forget what the threshold of the temperatures are. It's hundreds and hundreds of degrees in
both directions of hot and cold. And the temperature is changing every 45 minutes because that's how
fast you're going around the planet, at least at that distance for the ISS. They might be
45 minutes in the sun, 45 minutes on the dark side of the earth, and then 45 minutes in the
sun again and the temperature of the tools and all of your equipment is swinging by hundreds of
degrees in both directions every 45 minutes. And so my question when I was in Lugano was,
I remembered this from Tim and I'm thinking about the hardware from a reliability standpoint. Could
you imagine just that hardware cycling through those temperature changes every 45 minutes?
And like what that would do to the hardware from a reliability standpoint, I would think would be
disastrous. So I asked this guy that question, and he said that there are orbits you can put
the satellites in that will keep them mostly in the sun rather than cycling every 45 minutes on,
45 minutes off. So that was his answer to me during the thing. But I guess for me, I'm also
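Preston's 45-minute figure checks out as roughly half an orbit. A back-of-the-envelope sketch (the altitude and constants here are standard approximations, not figures from the episode):

```python
import math

MU_EARTH = 3.986e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0     # mean Earth radius, m
ISS_ALTITUDE = 420_000.0  # typical ISS altitude, m

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
a = R_EARTH + ISS_ALTITUDE
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
period_min = period_s / 60

# Roughly half of each orbit is in sunlight and half in Earth's shadow.
half_min = period_min / 2
print(f"orbital period: about {period_min:.1f} min, sun/shadow halves: about {half_min:.1f} min")
```

An ISS-altitude orbit takes about 93 minutes, which is where the roughly 45-minutes-light, 45-minutes-dark thermal cycling comes from.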
looking at it just from a reliability standpoint. Maybe it's not as harsh. Maybe this is a better
environment. I don't know. It's a fascinating discussion. But the point that I could never get over was
the reliance on launch costs coming down by 10x just to get it into orbit.
And then it's got to work and the reliability and all these other things that are completely
secondary to this massive hurdle.
And I mean, he's calling it a moonshot.
I think it's really fascinating that they're doing a test run on this.
But as far as the actual viability and like whether it's actually going to happen, I just,
I don't know.
It seems like it's just so out there.
Saying that, I remember reading something years ago.
And I think, I was just looking it up on Google.
I think it's called Kessler syndrome, and it's this idea that the more stuff we send into space,
like we obviously already see on Earth, with the amount of solar farms, all you need is a
giant hailstorm and you've just decimated like millions of dollars of solar-based equipment.
What happens in space when an asteroid belt kind of comes, I don't know, flying through
and just kind of like decimates a whole bunch of this material?
And then you've got all of this space junk, like flying around at speeds in excess of
tens of thousands of kilometers an hour.
And they're just nailing into all of these satellites.
Like, is there a point whereby sending all of this stuff up into space, we're impeding
our ability in the future to be able to go further afield and such?
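Seb's "tens of thousands of kilometers an hour" is about right for low Earth orbit. A rough sketch (the altitude is chosen arbitrarily for illustration, not taken from the episode):

```python
import math

MU_EARTH = 3.986e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m
ALTITUDE = 550_000.0   # a common low-Earth-orbit altitude, m

# Circular orbital speed: v = sqrt(mu / r)
v_ms = math.sqrt(MU_EARTH / (R_EARTH + ALTITUDE))
v_kmh = v_ms * 3.6

# Two objects on crossing orbits can close at up to roughly twice that.
worst_case_kmh = 2 * v_kmh
print(f"orbital speed: about {v_kmh:,.0f} km/h; worst-case closing speed: about {worst_case_kmh:,.0f} km/h")
```

That works out to roughly 27,000 km/h per object, with collision closing speeds potentially over 50,000 km/h, which is why even small debris is so destructive.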
Yeah.
Evidently, it must not be too much of a concern, because I don't think that they would
be jumping through any of these hoops if they didn't think that it was a very viable path,
if they could overcome some of those hurdles, like the 10x reduction in launch costs.
Let's go on to the next one.
And if you are a person who's tracking this and you want to share some information, I would love to learn more.
I find this whole thing, this whole idea, just super fascinating.
Okay.
You wanted to talk about custom learning.
Is that right, Seb?
Yeah.
Okay.
Let's do this one.
Yeah.
So long story short, someone ended up posting a lady called Steph, S-T-E-H-U-T, and she was asking about, like, what does AI do for the world of learning?
And this one here I find really, really fascinating.
So I was kind of digging around and I was just like, huh, I wonder where is AI in the world of learning right now?
And even from my own personal learning, being able to have this AI bot feed me information about my curiosities has helped me understand the world from such a different lens.
So anyway, as I kind of did a little bit of digging, it turns out that I think it was back in September, Google released something called Google's Learn Your Way.
Now, in essence, this is basically a living, dynamic tutor that's tailored to each individual's pace, interests, and background. And so it can ingest these static, one-size-fits-all textbooks, and it's able to take that material and convert it, depending on your grade level, your interests, your learning preferences, into mind maps, audio lessons, narrated slides, interactive quizzes. And so the way that I'm seeing this is this could profoundly change our education
systems. At the moment, when we think about, like, what is the school system? When you go all the way
back, we've come from like a bit of a Victorian era where we were trying to create a labor workforce.
We want people to do very specific tasks. And when the bell rings, you move on to the next task.
And it didn't really incite curiosity. And so what's really cool about this is, I think as AI is taking
over a lot of kind of the knowledge worker space, as robotics in the future is going to take over
potentially some of the manual labor space. I think where humans thrive is in the,
more of the creative space. So I think some of these like AI tutors are going to profoundly
change the way kids are able to be curious because imagine being in a classroom, but essentially
having a one-on-one teacher at all times being able to support you. And so if you're getting a
question, I don't know, a math question or a physics question or a biology question, and then
they're able to phrase it in like in line with your interests, that is hugely going to increase
your ability to learn. And so already, in like very early tests, it looks like students
are increasing their recall rates by like 11% plus just on recall tests. And to me, I was like,
well, that's quite low. I was expecting it to be like hundreds of percent. But I also believe
that that is only going to increase as this technology becomes far more efficient. And
kids are able to communicate in a way that gets information that is in alignment with their level
and their understanding and such. But I'm curious to hear your thoughts on it.
Let's take a quick break and hear from today's sponsors.
No, it's not your imagination. Risk and regulation are ramping up, and customers now expect
proof of security just to do business. That's why VANTA is a game changer. Vanta automates
your compliance process and brings compliance, risk, and customer trust together on one AI-powered
platform. So whether you're prepping for a SOC 2 or running an enterprise GRC program, VANTA
keeps you secure and keeps your deals moving. Instead of chasing spreadsheets and screenshots,
VANTA gives you continuous automation across more than 35 security and privacy frameworks.
Companies like Ramp and Riders spend 82% less time on audits with Vanta.
That's not just faster compliance, it's more time for growth.
If I were running a startup or scaling a team today, this is exactly the type of platform I'd want in place.
Get started at Vanta.com slash billionaires.
That's Vanta.com slash billionaires.
Ever wanted to explore the world of online trading, but haven't dared try?
The futures market is more active now than ever before, and Plus 500 futures is the perfect
place to start.
Plus 500 gives you access to a wide range of instruments, the S&P 500, NASDAQ, Bitcoin, gas,
and much more.
Explore equity indices, energy, metals, forex, crypto, and beyond.
With a simple and intuitive platform, you can trade from anywhere, right
from your phone. Deposit with a minimum of $100 and experience the fast, accessible futures
trading you've been waiting for. See a trading opportunity, you'll be able to trade it in just
two clicks once your account is open. Not sure if you're ready, not a problem. Plus 500 gives you
an unlimited risk-free demo account with charts and analytic tools for you to practice on.
With over 20 years of experience, Plus 500 is your gateway to the markets. Visit Plus 500
to learn more. Trading in futures involves risk of loss and is not suitable for everyone. Not all
applicants will qualify. Plus 500, it's trading with a plus. Billion dollar investors don't typically
park their cash in high-yield savings accounts. Instead, they often use one of the premier passive
income strategies for institutional investors, private credit. Now, the same passive income strategy
is available to investors of all sizes thanks to the Fundrise income fund, which has more than
$600 million invested and a 7.97% distribution rate. With traditional savings yields falling,
it's no wonder private credit has grown to be a trillion dollar asset class in the last
few years. Visit fundrise.com slash WSB to invest in the Fundrise income fund in just minutes.
The fund's total return in 2025 was 8%, and the average annual total return since inception
is 7.8%. Past performance does not guarantee future results; current distribution rate as of 12/31/2025. Carefully consider the investment material before investing, including objectives, risks, charges,
and expenses. This and other information can be found in the Income Fund's prospectus at
fundrise.com slash income. This is a paid advertisement.
All right. Back to the show.
I think the big breakthrough on all of this is going to be just the AI's ability to sense
how the child learns.
Just look at your kids, anybody that's got kids, you know, between one and the other,
they learn very differently.
Some have to go through examples.
Some have to, you know, everybody has a different way of learning.
I'll give you an example.
I love audiobooks.
I prefer an audiobook over the physical copy.
Actually, I prefer it being read to me as I'm flipping the pages, ultimately,
but if I had to choose one over the other, I would actually prefer the audio version because
I just kind of learn that way a little bit better.
And it's different for everybody.
And so just as an example, the AI is going to learn these different techniques that people
have, or what's most optimal for them, the best way to frame it.
As a funny example, when I was a student at the Military Academy at West Point, we would
always have these tests that were framed from, like if you were in a math class, it was
like, you have an artillery round and you're going to shoot it at whatever. And it was like always
framed for some type of military example. And we would always roll our eyes, be like, oh, my God, can you
just give us a normal question and not some military type question? But I use that as an example
of the framing of the thing. Let's say your kid, he loves football, or, you know, your daughter
loves dance or whatever. Like, the examples could always be framed in a way that is exactly what
they want to hear and how they want to think about it, right? So I think that that's going to be
huge. Now, the part that I think is still lacking, when we went through COVID, the kids had to do
online Zoom-call-like classes. And to be quite honest with you, Seb, it was disastrous.
Like, it was just, it was a train wreck. Anybody who's gone to in-person education versus, you know,
learning through a computer screen, like there's some advantages for the computer screen, but there's
a lot of advantages to like in person. And I wonder if once you start getting into, you know,
I don't know that everybody would agree with this, but like maybe the humanoid robot AI is going to
make a difference because you're having like an in-person interaction. And so where are we in 10 or 15 or
20 years with respect to some of these ideas with AI learning? You know, once you put something
into the physical environment and you could go over to a chalkboard or you could go on a field day and
you can see whatever. I don't know. I think that the learning kind of takes on a whole new level
when you start incorporating it into physical space versus just always looking at something on a
computer screen. And maybe you can do that with a VR environment. I personally don't like these
things on my face. I find them to be highly annoying. But, you know, some people might like it or
learn that way as well. I think you're spot on. And I think that's a really important point to
kind of mention because I think that when we're talking about any of these points and any of these
tech podcasts we do, we're kind of talking about what is the newest technology
and how is it impacting us. But there's always going to be pros and cons to everything that
kind of interacts with us and our world and how we show up. And what I think about when I think
about education is that the knowledge is one aspect of it. But when we're in school, when we're
around our peers, when we're interacting with our teacher, there is also the human connection.
And I think part of during our developmental years while we're in school, we're also learning
how to regulate our nervous system. And so we're co-regulating with the teacher. The teacher
when they're calm and grounded, that helps us learn. And so what does that do when we're
replacing these physical beings with this digital entity? Is that digital entity going to be
able to be emotional? Are they going to have that empathy and that capacity to support them? Or can you
just completely not replace that with a digital entity? Like, there's far more from a resonance
perspective that we're interacting with. And that's what I find really interesting. I wholeheartedly
agree. I'll tell you, my wife, hearing me say, oh yeah, you're going to have an AI humanoid robot, like,
teaching, she would just be disgusted by such a comment. And I think that there's a lot of
people out there that probably would be like, oh my God, that sounds like an absolute nightmare
of a future. And hey, maybe it is a nightmare if you, I don't know, but I can see the
demand for some of these things taking place because I think that the customized education that
you're going to get out of that is so far superior to some teacher that is an expert in history,
and everything comes with a history lens, and maybe the student that they're teaching hates history.
Can't stand history, and then everything is flavored with history for an entire year.
You're sitting there with 30 other students and so I think that when we look back at the
education that you and I grew up with, it's going to be so archaic to where I think a lot of this
is going, whether people like it or not. Totally. And there's this famous, I think it's a Buddhist quote,
which is like, you shouldn't judge a fish by its ability to climb a tree.
And it's just like, I think the school system, I was 100% the fish.
I was probably even less than a fish.
I was a rock just laying on the floor.
There was no way that I could climb a freaking tree.
And so I think that school to me, I never felt like I fit in.
But once I left school and I was able to find audio books, I was able to find my ability
to find how I learn.
Amen.
Oh, my God.
It was profound.
Amen.
So I think the biggest question is, how do we find this balance between, like, personalized learning while also having human connection?
And I think that's something that we really haven't found that balance yet.
I have conversations with people all the time and they ask me, oh, what kind of student were you, Preston?
And to be honest with you, Seb, I hated school.
I hated it because I didn't feel like I was ever learning something that I actually had an interest in.
I was always being force fed this stuff that I had no interest in and it was never framed in a way that interested me.
Yeah.
And same experience.
After I got out of college, I just started reading things that I wanted to know more about.
And then I just loved it because I was focusing on things that I wanted to learn about.
And just truly never had that experience the whole time in high school.
And I mean, I enjoyed the engineering classes I had in college.
But beyond that, like all the other stuff that I literally had to take a poetry class in college with a bunch of, you know, how old were we?
18, 19-year-old dudes all standing in, you know, a classroom reading Shakespeare
to each other. Like, good God, shoot me. Like, it was the worst. It was the worst experience ever.
Because I don't like that stuff. I'm sorry. If you love, if you love that stuff, I'm sorry. I don't like
that stuff. And that's, that's to the whole point of this learning thing is like people can lean into,
finally lean into things that really interest them. And like, I can't even imagine what that's going to
do from if you start that out early and you do it for 20 years where the person is being led down a thing
that actually interests them and you're being taught by the world's greatest teacher on subject
X, Y, and Z that that person is interested in. I cannot even imagine what those results would
look like by the time they're 20 years old.
That's the thing, though. It's just trying to find that balance. I wholeheartedly agree.
And it's just like, how do we achieve that? How do we find this balance? The human touch
and technology, it's a challenging one. There's always given a take.
Yeah. I have a couple AI topics that I wanted to bring up. The first one, and I'm going
to put up here on the screen. This one's from Andrej Karpathy,
the very famous AI researcher. He was head of AI at Tesla for a little bit. And then he was one of the
founders at OpenAI. To be quite honest with you, of the YouTube videos I've watched of people
teaching AI, he is probably my favorite of anybody on the internet. He is such a great teacher
and he makes things so accessible. And he had this tweet and I just think that it's kind of really
interesting tweet and I think it's worthy of highlighting here. He says, don't think of LLM's large
language models as entities, but as simulators. For example, when exploring a topic, don't ask,
what do you think about X, Y, and Z? There's no you. Next time, try: What would be a good group of
people to explore X, Y, and Z? What would they say? The LLM can channel/simulate many perspectives,
but it hasn't thought about X, Y, and Z for a while, and over time, informed its own opinions
in the way we're used to. If you force it via the use of you, it will give you something by adopting
a personality embedding vector implied by the statistics of its fine-tuning data and then simulate
that.
It's fine to do, but there's a lot less mystique to it than I find people naively attribute
to asking an AI.
So his big point here is don't say, what do you think?
He's suggesting people say, who would be the best group of people to have an opinion
on this?
And then ask the AI, what would this group of people think?
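In practice, Karpathy's suggestion is just a different prompt formulation. A minimal sketch of the two styles (the topic and wording below are illustrative, not from the episode):

```python
def entity_prompt(topic: str) -> str:
    # The naive framing: treats the LLM as a single "you" with settled opinions.
    return f"What do you think about {topic}?"

def simulator_prompt(topic: str) -> str:
    # Karpathy's framing: ask the model to simulate a panel of perspectives.
    return (
        f"What would be a good group of people to explore {topic}? "
        f"For each of them, what would they say, and where would they disagree?"
    )

print(simulator_prompt("data centers in space"))
```

The second form tends to surface a range of views instead of one personality implied by the fine-tuning data.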
I find that to be useful and important.
and it gets to the heart of it. A person who's literally, like, programmed these things, he's getting
at something: it's showing you the bias, so that it will give you a cleaner and more accurate answer
to what you're going after. So I think that that's important for people to understand. Any comments?
So we may talk about this if we have time, and it's around kind of this idea, it was a
Diary of a CEO podcast that both of us have listened to. And something that he mentioned in
detail that I thought was really interesting, that kind of is in alignment with this, is this idea
that if we go on social media, our algorithm is giving us information based upon our interests.
It's not necessarily giving us the objective information. And what he also discussed is this
idea that if you go and ask AI, what does it think about maybe this controversial subject?
Depending on your location, if in one location, they have very different views, it's going to give
you that answer. If you're in another location, it's going to give you a different answer.
And so I think the thing that is interesting is that you've got to figure out how to prompt AI to give you the most objective answer.
And so this is something that I've personally found, and I'm curious to hear your thoughts, as I've been using AI, being able to give the AI a persona.
So saying like, hey, I want your perspective.
If you were, say, an expert developer that has a knowledge in X, Y, and Z, I want that perspective.
And so being able to look at it through a specific lens, and I don't think people ask AI to look at
something through a specific lens. And with that in mind, it's just going to give us what it
thinks we want to hear so that we're just more attentive to AI. Yeah. Yeah. And we are going to
play a couple of clips from that interview here later on in the show. But yeah, outstanding comment.
This next one's a little political. And I'm just, I'm bringing it up because I find it
kind of an interesting topic. The president has recently come out and he's trying to take away the
states' rights. So we have 50 different states here in the U.S., and they all seem to be coming up with
their own AI laws and rules. And the president is now trying to say that there's going to be an
executive order on AI this week that forces all of the states to kind of fall under the federal
level AI mandate of like how we're going to go about this as a country. And as a person that
firmly believes in states' rights and pushing as much responsibility
out of the federal government and down to the state level as possible.
This one pains me.
But at the same time, I'm also looking at it and I understand the logic that we're in a global
race and you're up against some other superpowers that are moving out, and we're in a situation
that, you know, one state is advantageous and the other one is not.
Then the big tech companies set up shop in the one that's advantageous.
But they also are concerned about the whims of that local
government shifting and changing, and then they have to move all of this infrastructure to
another state that now looks like it's more advantageous for doing business there. And I think what,
I suspect what's happening is the president is getting pressured by the big tech companies
to have something that's unilateral across the board. So they never have to think about,
my words, getting rug pulled in state XYZ so that they have to then move all of this huge amount of
CapEx into another state and the cost and expense and most importantly the time to make that
conversion over to a different state. So I think what they're doing is they're lobbying him to do this.
I'm curious what you think about some of this. I know this is a U.S. topic set, U.S. politics,
but I don't know. I can understand why the tech companies are doing this. They just were trying
to minimize risk to them. I suspect they're the ones that are pushing on this. But if you have any
comments. This regulation is something that I'm so torn on. And I think that going down the Bitcoin
rabbit hole, I think a lot of Bitcoiners have found themselves in this position where they're like,
you know what, let's deregulate. We believe that the free market knows what is best. And ultimately,
if you want to move towards the society with growth, you don't want to impede that information,
making it to the free market. And I think the thing that I really struggle with is, I don't necessarily
think that it is black and white. It's either no regulation or regulation.
It's somewhere in between.
And like a perfect kind of example to me is something along the lines of,
you can have a corporation that can be capitalizing off the destruction of the environment
and they don't have to front that cost.
And so then does a regulator have to put something in place where there is going to be
a financial burden for them capitalizing off the natural environment because they're
making profit from destroying forest, extracting minerals, extracting XYZ from our natural
environment. And so I think that a lot of these corporations, if their bottom line is obviously
profit and there's no regulation, they're going to continue to
extract from the natural environment. And so I think the same thing is true for kind of a lot of
these AI entities. Like, if their goal is how do we build the fastest learning, most powerful
AI engine, even if there is a risk of destroying the human race, we're just going to keep doing it.
And so it's just like, does there need to be regulation in place because the free market doesn't
have the capacity to push back on something like that. And I'm not sure. I don't have like a fully
formed opinion on it because I can kind of see it from both sides. I don't know what your thoughts
are. Yeah, from a state's rights standpoint, one of the advantages is it creates competition
between the states in that if, and let's not use AI, let's just use, you know, oil. Like, let's
say you're an oil refinery. Let's say that citizens of one state just really don't like it because
of what it does to the environment, what it does just from an aesthetic standpoint, whatever.
It doesn't matter.
Okay.
And then another state is like, oh, no, we don't care.
We want all that business to come in here.
We want all that commerce.
So we're going to be, you know, light on regulation around that particular industry.
So that's, and this is very hypothetical, just to kind of illustrate the pros and cons.
The company or the state that allows this to come in and just proliferate everywhere.
Let's just say that it's over the top.
And it's just one of the ugliest states in the United States to live where the other one that was more restrictive is a much more beautiful state.
And if you're the type of person that doesn't want to be looking at that kind of stuff, well, then you'd move.
You'd vote with your feet and move to the other state in the one that's more desirable for you.
Let's say there's a bunch of tax advantages in the one that is pro oil, right?
You don't want to be paying more money than you have to.
So you move into that state.
So in that scenario, you're creating competition for people to migrate to the
state that aligns with what it is they want out of their ecosystem that they're living in the
most. When it comes to AI, where this is a little bit different is the product that's being
built here, I don't know that it has a benefit or a negative for one state over another.
That would kind of be the argument for why, hey, this is different, because you're literally building
intelligence. I think you could make the argument from a data center standpoint:
the data center, the footprint, the
size of that footprint, and the energy consumption of that footprint would be concerning from
state to state. Some people might not want all of those data centers in their state for whatever
reason. Some might love it because of the energy infrastructure that's going to be built out
to service it all. So those are, you know, it's a hard one. It's a hard one. That's an interesting point
because I think the physical infrastructure, I don't believe that should be regulated on a
federal level. It's just like if a state wants to open up jobs, if they want to kind of have
more data centers, for sure. By all means, it's kind of that's their choice. However, when
the benefit or the negative effects of a technology not only impact that state, but they
potentially impact humanity or the nation, all of a sudden it's just like a state could be
making a decision that impacts far more than its locality. That's the thing that I find really,
really interesting. And so at what point do we need certain regulation because people are making
decisions that are far more detrimental on a bigger scale? And then the question is who is, and this is,
I think, what gets to the heart of regulation. It's just like, cool, okay, this is great if we can
have perfect regulation. But you've got to ask who is actually creating the regulation and who
is regulating the regulators to ensure the regulators are being fair? And where's the money coming from
to help fund this regulation.
And many times, like if you look at, I don't know, the pharmaceutical industry,
the pharmaceutical regulating agencies are funded by Big Pharma.
So it's just like there's these conflicts of interest.
And so that's where it gets so convoluted.
And again, we're going to talk about this Tristan Harris discussion later in the show.
And if you want to go down the regulatory policy path, and if you like this particular topic,
then I would highly encourage you to listen to that entire interview because it gets
very heavy in this domain as to whether you should or shouldn't. And it's a very biased point of
view by the person being interviewed, which is Steven Bartlett. I'm sorry, Steven Bartlett's
doing the interview of Tristan Harris. Tristan obviously has a very strong opinion in that.
But I think that it's good just for a person to kind of hear that counter argument or that
argument and whether it's something that they agree with or not. So let's go to the last one that I
have on just the AI topics here. I'm going to put this up on the screen. And this is interesting
because we're talking about Google's Titans. This person had this post here. And so like,
what is this? Titans is Google's new architecture that gives a language model something like
a real long-term memory while the model is running. And so they have a chart up here and they're
showing how it's ingesting 10 million tokens and it's still maintaining around 70% accuracy, which I guess
is insane. And it has some of the other models and what they do with 10 million tokens, and
they're nowhere close to what Google has uncovered here with Titans. This one I found interesting
because I find myself wanting to just put really large documents into a really long context
window asking it to still perform and it gets laggy and clearly there's something missing
when it comes to long-term memory. So this paper came out in 2025. It didn't come out this past
week, but it did come out this year, and the title, or the subtitle, of the paper is "Learning
to Memorize at Test Time."
And some of the token sizes that you're putting through this thing, I was playing around
with it trying to understand how it works.
And I got this paragraph that I'm just going to read for folks to kind of like think through
how it's doing this.
Imagine you're scrolling through your social media feed, and most posts are usually stuff
like memes or friends' lunch pics.
Your brain, like the AI, expects that junk and mostly ignores or forgets it to save space.
But suddenly there's a post about a surprise concert ticket giveaway for your favorite band.
That surprise grabs your attention.
So you remember the details like the entry deadline and rules while letting the boring posts fade away.
In Titans, the AI uses a similar surprise signal, from calculations based on gradients, to spot
unexpected or important info in the huge stream of data, deciding to store that useful bit
in its long-term memory.
That way, it keeps what's valuable for later tasks, like answering questions, without cluttering up
with irrelevant repeats.
So my immediate follow-on question for that was, okay, so then how does the AI know that
that concert ticket or that particular band is a surprise in the feed?
And what I got back was it's just looking at the sheer amount of training data that it was
trained on, which is the whole internet. And as it's going through said document that, you know,
has these millions and millions of tokens in size, it is finding something that is unique in reference
to the entire data set that it was trained on. And that gradient is what's allowing it to say,
oh, that was different than what I would have predicted or expected, so then it remembers it.
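The mechanism Preston describes can be caricatured in a few lines: score each incoming item by the gradient norm of a prediction loss, and only write the high-surprise items to memory. This is a toy sketch of the idea, not the actual Titans architecture (the linear model, the mean-cutoff write rule, and the data are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": predicts the next vector as a linear map of the current one.
W = rng.normal(scale=0.1, size=(8, 8))

def surprise(x, y):
    # For loss = ||W @ x - y||^2, the gradient w.r.t. W is 2 * (W @ x - y) x^T,
    # so its norm grows with how far the observation y is from the prediction.
    err = W @ x - y
    return float(np.linalg.norm(2.0 * np.outer(err, x)))

# A stream of (input, observation) pairs standing in for tokens in a long context.
stream = [(rng.normal(size=8), rng.normal(size=8)) for _ in range(100)]
scores = [surprise(x, y) for x, y in stream]

# Hypothetical write rule: keep only above-average-surprise items.
# (Titans learns an adaptive rule; a fixed mean cutoff is just for illustration.)
threshold = float(np.mean(scores))
long_term_memory = [pair for pair, s in zip(stream, scores) if s > threshold]

print(f"stored {len(long_term_memory)} of {len(stream)} items in long-term memory")
```

The key design point is that "surprise" is defined relative to the model's own predictions, so only observations the model would have gotten wrong earn a slot in memory.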
And what I find so fascinating, so years ago, I interviewed the author that covered
Claude Shannon.
And when we were covering information theory, I distinctly remember him saying, Preston,
it's just surprise.
Like, his algorithm, his mathematical algorithm is just looking for surprise.
And when it was going through this, Titans thing, I was just like, wow, this is literally
just like information theory in order to do like long-term memory.
It just blows my mind, right?
It's fascinating, dude.
It's, what was it? There's that word in information theory, which is, if I send a letter to you,
there's going to be a lot of noise in that letter. Ultimately, you're looking for the surprise,
the new piece of information. And it was kind of the inverse of information. Information is noise.
What we're looking for is the signal in that noise. And so it's kind of like, if you've got a block of marble,
you're cutting away all that excess to find the statue inside. And I've gone blank on that word.
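The Shannon connection the two of them are circling is literal: in information theory, the "surprise" (self-information) of an event is the negative log of its probability, so rare events carry more bits. A quick illustration:

```python
import math

def surprisal_bits(p: float) -> float:
    # Shannon self-information: I(x) = -log2(p(x)).
    # Likely events carry few bits; rare events carry many.
    return -math.log2(p)

print(surprisal_bits(0.5))    # a coin flip: exactly 1 bit
print(surprisal_bits(0.001))  # a one-in-a-thousand event: about 10 bits
```

A routine meme in the feed is the high-probability, low-surprisal case; the rare giveaway post is the low-probability, high-surprisal one worth remembering.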
But what comes to mind as you're talking about this is one of my favorite TED Talks
I've ever listened to.
And I think you've read the book is by a guy called Donald Hoffman.
And he talks about, do we see reality as it is?
Yeah.
And he gives the analogy, and I highly recommend anyone going out and listening to this,
just type in TED Talk, Donald Hoffman, do we see reality as it is?
So good.
And he mentions, there's this, it's like a jewel beetle in Australia.
And this jewel beetle has been around something like 300 million years.
So you would assume, if it's been
around for 300 million years, of course it sees reality as it is. But then the Australians
started throwing these beer bottles, these brown stubby beer bottles into the desert,
and all of a sudden this Doom Beetle nearly went extinct because it had effectively created
a hack, a rule of thumb in its brain that is, hey, the bigger, the browner, the better.
And it just goes and tries to mate with these brown beer bottles nearly went completely extinct.
So the Australian government had to step in, bam beer bottles, and all of a sudden that beetle
started to kind of recover. But what I find really interesting about this talk and what he's trying
to kind of get at is this idea that we are not optimized to see reality as it is. We're actually
optimized for survival of the fittest. And so what we do is we take in all this information,
we discard all the noise, and we try and create these rules of thumb to basically maximize
our chances of survival. So if there's a train coming towards us, we're not processing all this
information, being like, what is this thing? I wonder how fast it's going. At what velocity is it going to hit my body? We ignore all that data. We're just like, a train's coming towards me, I need to get out of the way. We've created a rule of thumb to recognize that this is dangerous.
Now, the reason why I say all of this is because AI, what is surprise to the AI? What information,
when it's looking at 10 million tokens, how does it know what is valuable? What is it actually
optimizing for? Because we're optimizing for survival of the fittest. What is it optimizing for?
And so that's where I think I'm curious to dig deeper into these models and try and understand, like, what are they trying to pull out?
Because either a developer has had to code that in, saying this is the type of information that we're actually looking for, or it's figuring that out itself.
And is that actually relevant?
And I'm curious to hear your take or your thoughts on that.
I don't know that I have enough information on how these gradients are determined.
I do know that I've seen posts by Elon Musk and others that really speak to the idea that the training data set, like using the entire internet, every single thing that you can get, isn't necessarily leading to the best model for tasks X, Y, and Z.
Is it going to do really well for understanding English, or language in general?
Yes.
Is it going to be most optimal if you're trying to understand, like, a medical discussion?
No.
So then you've got to get into like, okay, so we have an AI that understands the language perfectly
or near perfectly, and now we're going to apply that to a different model that then is more
focused on the training set as to like what we're putting in it. But all of that is way outside
of my depth of understanding as far as like how much research I've done on it. I have seen a lot of
conversations by heavy hitters in the space talking about curation of the data set and
making sure that you don't just put everything in there, because it can be somewhat disastrous for how competitive it'll be in particular tasks.
I think this is a really important point, which is that it still falls on the individual to determine what is valuable and what is not.
And I think that you can have this huge, huge research study, this research paper that you're looking at, and it's spitting out a whole bunch of information saying, this is valid, this is valid, this is valid.
And the average individual may be like, cool, this is rad, but a scientific researcher who understands the subject is able to, again, filter through that and separate out
the signal from the noise. And I think that's where ultimately humans aren't going to be replaced
anytime soon because specialists in their field are able to see what is valuable and what is not.
It's just like how, as Bitcoiners, we can see when there's a newspaper article or a New York Times post or whatever that says, oh man, Bitcoin is consuming more energy than Argentina. It's just like, well, that's misleading and that's not necessarily accurate. And so I think that you still need to be a
specialist and deeply understand the topic to understand the validity of the output from a lot of
these models.
Yeah.
Seb, let's go to our last topic for this show.
And we're running a little long.
So I think we're going to have to schedule another one to put some other topics in there
because I don't even think we're going to get to this Tristan Harris discussion.
But you wanted to talk about long distance haptic touch.
Go ahead and take this topic away.
Totally.
So this is something that, oh, man, it blew me away.
I didn't even know this existed.
Essentially, I stumbled upon this post by Mario Nawfal.
I wonder if I can even open up the post and I'll show you guys.
So I was kind of scrolling Twitter the other day and I stumbled upon this post by Mario.
And basically what scientists have kind of created is this long-distance haptic touch.
And so for those that aren't familiar with this word haptic, it's basically, how do we create flexible patches that bring touch into virtual reality, augmented reality, and such.
And so it gives a little bit of information in this post, but I started to dig a little deeper,
and I found the original scientific study, which kind of dove into this. The actual scientific study,
you can find here. It was basically called "skin-attached haptic patch for versatile and augmented tactile interaction." And the abstract kind of says it in the first little bit: wearable tactile interfaces can enhance immersive experiences in virtual and augmented reality systems by adding tactile stimulation to the skin along with the visual and auditory information delivered to the user.
And so to me, what I found really, really fascinating is these little haptic touch patches, essentially they are, if I remember correctly, 1.1 millimeters in size, and they're extremely powerful for their size, and they're able to create both pressure and high-frequency vibrations. So if you were to wear a glove with these haptic patches on your fingers, and someone else was, say, wearing the same glove, and they went and touched textures, shapes, edges, letters, 3D surfaces, you could feel what they are feeling through these haptic touch sensors. And so what does that do? I think we could use this in so many different
ways. Either you could basically have an interaction with someone, let's say through Zoom, you and I are
talking right now. And if we went to hug or we went to communicate in the way that we wanted to
involve touch, I could feel what you were feeling and have a much more like three-dimensional
detailed interaction. But we could also look at it from the perspective of like virtual reality
and augmented reality where you'd be able to wear certain things, whether it's gloves or a suit,
and you'd be able to wear VR goggles. You'd be inside a simulation and you could be interacting with the simulation, not just from the visual sense, but also from the physical touch sense. And that to me, I think, is mind-blowing. So I'm curious to hear your thoughts on kind of this haptic touch sensor. Yeah. So while Seb was talking there, for people that are just listening
to the audio, I put up a video of a company that's called Fluid Reality. And what that particular company is doing is putting these haptic sensors on humanoid robots, to give the robots better sensing of how they're feeling different objects and whether they should be squeezing an apple very hard or very lightly as they're interacting with it.
And then you can see up on the screen right now, if you're seeing the video of a person
that's wearing a glove that's training one of these humanoid hands with the sensing
capability that Seb was describing.
And when we just look at the robot's ability to do certain tasks, like let's say it's doing laundry or whatever, you can very quickly realize that the amount of pressure that it's applying to the objects that it's interacting with becomes really important.
It becomes important from a power management standpoint, from just not breaking the object that
it's holding or damaging it.
And I think that haptics are a huge part of humanoid robots and where a lot of that's going
to be going.
And this is something that is definitely worth paying attention to.
And I have a couple more videos here.
I'm just going to quickly put up on the screen for people to see.
And this is the Tesla robot, for people that are just on the audio, showing you how crazy accurate the hand gestures and the mechanics in the hands are.
And I don't know where they're at from a haptic pressure feedback standpoint on this particular
humanoid robot, but by the looks of the hand gestures, it looks like it has a lot of sensitivity
built into it.
From my understanding, haptic touch is one of the biggest hurdles that robotics is trying to overcome right now.
It's one thing to have a robot that is a forklift truck, that is just going around doing its
own thing, picking up containers, moving those containers, dropping those containers.
But it's another thing to have a robot that, say, operates in the kitchen and can pick up an egg
without crushing it, or to be able to do more pressure-sensitive things like use a screwdriver and understand when it's starting to strip the screw, or to do things like surgery or sewing and cooking, things where ultimately I think there are a lot more fine motor skills than we really take into account as humans.
Like, we are being given so much information through our hands, through our sensory touch,
and making decisions on that information that we're doing autonomously.
And being able to bring that into the robotic world, I think we're still a little ways
away.
Because from my understanding, a lot of these haptic touch sensors can cost anywhere from like $10,000 to $50,000 for a set of hands that are able to do things like this.
So we're a long way off from having this available to everybody in the household. But to your point earlier, if we get a 10x improvement, a 90% reduction in price, all of a sudden you can be doing this for a few thousand dollars. I think it's
going to be far, far easier. But again, I just think it's so fascinating, both on the robotic
side of things and on the individual side of things. And so one thing that I just wanted to kind
of bring up, and I'm curious for your thoughts: we had, when was it, in March of this year, we were skiing in Jackson Hole together. And we ended up having a conversation, I don't know if you remember, on: are we in a simulation?
We were kind of sat at a table and we had this conversation of, are we in a simulation?
And this idea of, is AI, and say some of the robotics we're using today, is it being created or is it being rediscovered?
As in, it already exists.
Yeah.
And so when I think about haptic touch, what comes to mind is the movie Avatar.
And I think about as humans, we want to feel a part of a community.
We want to create value.
We want to feel like we've got purpose.
And so what do we do? We start to create products. We start to create technology. We start to kind of
create companies. And we go out into this world. In response, we create advancements and productivity and
efficiency and such. And we start creating technology that starts to replace us. So all of a sudden,
when we start to replace ourselves, we're losing that sense of purpose. We're losing that sense
of like community where we feel like we're creating value. And so at that point, if society is degrading and all of a sudden there are rising rates of depression and suicide and substance abuse and all of these things,
what do we do as a community?
Well, if technology is advanced enough,
you could create a simulation with haptic touch, like Avatar,
where we're able to step in to a world
and move back to a much simpler time.
And back to that simpler time where we can start to create value again.
And then we go and do it again and again,
and it's like how many times have we stepped into a potential simulation?
Now, I'm not necessarily saying this is what I believe, but I think it's an interesting thought, because we are at a point where it wouldn't surprise me if, in the next 30, 40 years, we're able to create simulations that feel unbelievably realistic and we can't differentiate between the physical world and these simulations.
It's one of Elon Musk's biggest talking points when this topic of simulation theory
comes up is exactly what you described.
But we'll leave you with that thought experiment.
I don't know, right?
I don't have an opinion.
But I do find it to be a fascinating thought experiment, and fun to kind of just tease out.
How would we know?
The pace, well, the other thing that I find fascinating is all these AI environments that are just kind of being made up ad hoc.
They're showing videos of these things.
I don't know if we've ever put it on the screen of one of the podcasts, but maybe we did earlier,
but it's just making up this environment and you're watching this video of somebody like walking
down a street and it looks completely real.
And it's just being made up on the fly by AI.
And then there's a bit of a memory component: if you saw a tree just a couple minutes ago and you turn around and start walking back the other direction, the tree will still be there for a certain amount of time.
And yeah, like, that environment's just being made up.
And so I can't imagine in 20 years where this is.
And then the memory recall of like what's been experienced in this made up world that somebody's
going around and sensing in virtual reality.
And I guess I say all this because it's teasing out this idea that you bring up, which is like, how do you know what is real and what you're experiencing is reality?
Because, you know, you go into any of these theoretical physics conversations and they'll tell you that what you think you know is not that at all.
That's very, very, very different from a quantum mechanics standpoint.
But anyway.
Absolutely.
And we've spoken about it on previous episodes, where Nvidia is creating, I think it's Cosmos, which is their simulation platform, so that people are able to test robotics in simulated worlds before they enter the physical world.
And these simulated worlds are getting so realistic now.
They have fluid dynamics.
They have gravity.
They're able to apply all of these various physics forces and such.
And so I think that the world we're living in is really, really fascinating.
We're at a point where we're kind of joining the physical and the digital world to the point
where it's just like, are we going to be able to separate them?
Because I do think that there are these teenagers, these kids that are growing up in this new world, and sometimes they get confused between what is real and what's not, because they're constantly interacting socially and emotionally in the digital world.
Yeah.
All right, guys, we're going to wrap here.
I hope you guys enjoyed the conversation.
So we'll probably do this again in like two weeks to kind of cover the other topics that we didn't even get to.
Most importantly, we want to cover this Tristan Harris discussion on the Diary of a CEO, Steven Bartlett's podcast. There were a couple different topics brought up there that we want to discuss on the show, in addition to more tech topics.
If you guys are loving this, let us know in the comments, we're having fun.
Hopefully you're having fun hearing some of the different topics that we're bringing to you.
And if you have any recommendations of things you want to hear, bring them to us on X and we'll
be sure to try to incorporate it into the next show that we do.
So, Seb, give people a handoff to your book or anything else that you want to highlight that's
out there.
And then we'll go ahead and wrap.
Absolutely. And again, like, we really appreciate anyone who just kind of takes the time to give these episodes a listen.
So if at any point you're just like, oh man, I'd love to hear you guys' perspective on this technology or that technology, feel free to just share it in the comments. And yeah, we really love to talk about what's happening in the world today and just kind of share our perspective.
And yeah, you can find me at Seb Bunney, and that's B-U-N-N-E-Y, on Twitter. And my book, The Hidden Cost of Money, or B is for Bitcoin.
But again, appreciate everyone giving this a listen.
Thank you for listening to TIP. Make sure to follow TIP on your favorite podcast app and never miss out on our episodes.
To access our show notes and courses, go to theinvestorspodcast.com.
This show is for entertainment purposes only.
Before making any decisions, consult a professional.
This show is copyrighted by the Investors Podcast Network.
Written permission must be granted before syndication or rebroadcasting.
