We Study Billionaires - The Investor’s Podcast Network - TECH004: Sam Altman & the Rise of OpenAI w/ Seb Bunney
Episode Date: October 8, 2025. Seb and Preston analyze the book "Empire of AI," reflecting on Sam Altman’s rise and OpenAI’s transformation from a nonprofit into a powerhouse AI firm. IN THIS EPISODE YOU’LL LEARN: 00:00 - Intro 00:04 - Why Sam Altman’s early ventures shaped his leadership style 00:09 - How storytelling plays a role in securing AI funding and public trust 00:11 - The founding vision behind OpenAI and Elon Musk’s original role 00:18 - The internal power struggles that led to Altman’s firing and reinstatement 00:20 - The significance of AI governance structures in shaping future technologies 00:28 - How OpenAI evolved from a non-profit to a capped-profit model 00:33 - Why AGI poses ethical and societal challenges 00:39 - The hidden costs and global inequalities in AI model training 01:00 - A sneak peek into longevity research and Lifespan by David Sinclair 01:01 - Why ancestral health might hold keys to understanding aging BOOKS AND RESOURCES Related book: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Seb’s Website and book: The Hidden Cost of Money. Seb's Blog: The Qi of Self-Sovereignty. Next book: Lifespan: Why We Age―and Why We Don't Have To. Related books mentioned in the podcast. Ad-free episodes on our Premium Feed. NEW TO THE SHOW? Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members. Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok. Check out our Bitcoin Fundamentals Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter.
Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: Simple Mining Human Rights Foundation Kubera HardBlock LinkedIn Talent Solutions Unchained Vanta Shopify NetSuite Onramp Public.com Abundant Mines Horizon Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm
Transcript
You're listening to TIP.
Hey everyone, welcome to this Wednesday's release of Infinite Tech.
Today, Seb Bunney and I dive into Karen Hao's book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
We trace Sam Altman's rise from his early startup and Y Combinator days to the founding
of Open AI with Elon Musk and the company's transformation from a nonprofit ideal to a
Microsoft-backed powerhouse.
Along the way, we unpack the famous blip, where Sam got fired back in 2023, OpenAI's complex governance, and the broader ethical questions raised by AGI. And guys, this is surely an episode you won't want to miss. So without further ado,
let's jump right into the book. You're listening to Infinite Tech by the Investors Podcast Network,
hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us each week as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today. And now, here's your host, Preston Pysh.
Hey, everyone. Welcome to the show. I'm here with the one and only Seb Bunny, and we are talking
OpenAI, Sam Altman. What in the world has gone on there at this company? Where's it going?
Where did it come from? And we have a book that we read together. And we'll be using that somewhat as
the framework, but also kind of going in other directions beyond just the book. And Seb, welcome
to the show, sir. Oh, man, thanks for having me on, Preston. And you know what I found really fascinating is, so for those that didn't listen to our previous episode where we discussed The Thinking Machine, and that's kind of the rise of Nvidia and Jensen Huang and essentially how Nvidia laid the foundation for OpenAI and neural nets, which is kind of the technical term for the foundation of what these large language models are built on, it really set the stage for reading this book. I super enjoyed it. And maybe the point that I'll just quickly share is that I had no idea to what extent Nvidia really did pave the way for AI.
We went from CPUs, central processing units, to Nvidia creating GPUs, graphics processing
units, which enabled parallel processing, computing tons of data, and that basically set the stage.
And so it was really cool going from that first book into the second book because I think it
helped with the depth of understanding.
Yeah, 100%.
And it's interesting because I don't know if you've seen the clip of Sam Altman and Jensen Huang.
And there was another gentleman there talking about all this investment that they're doing
to the tune of hundreds and hundreds of billions of dollars and how people are like, okay,
so how are they financing this?
And it looks like it's going in one person's hand and then into the other person's hand.
It's like the circular financing of all of it.
But that aside, let's go ahead and jump into this.
Okay.
So the name of the book that we read was Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. And this was written by Karen Hao.
The book was good.
The book had parts that like really got my attention.
There's other parts of the book where I was like, whew, a little brutal, a little woke.
But other than that, like, we're going to kind of go through the timeline and kind of
educate people on the rise of Sam Altman, what they're doing there at Open AI.
You know, we'll give our overview on the things we love, the things we hated, and we'll go
from there.
So, Seb, any opening comments, anything different? I'm curious if you kind of saw it the same way, especially the middle of the book with some of this woke stuff.
Exactly the same way.
I think to start the book, like very much grabbed my attention.
Yeah.
I've really, really enjoyed it.
And it started out, as we'll kind of get into, really discussing like Sam, the rise
of Open AI. And I think some of the stuff that you don't necessarily hear in the media
about kind of the construction of AI and such and the relationships that it's kind of built upon.
And so I found that really, really fascinating. But it definitely in the middle of the book,
got a little woke, got into some of the gender stuff and the environmental stuff.
But ultimately, it was an interesting book, for sure. Yeah. Okay. So let's go through
just basically Sam Altman's life, because I find this pretty interesting and it also kind of helps
frame things of like maybe where he's coming from. And this is not the arc of the book. I'm just going
to start off kind of talking about Sam, kind of giving people that background. So early in his life,
grew up in St. Louis, learned how to code at a young age. By 2003, he goes to Stanford and is doing computer science, but then drops out to start a company. He starts his company. This is around the 2005
timeframe. It was called Loopt. And he co-founded this. And it's a location-sharing social app, which I found kind of interesting that that's where he starts, right? He raised venture capital, became part of this early mobile wave. Loopt never gained a lot of mass traction. He did sell it in
2012 for $43 million. And this gave him, you know, some credibility in the tech founder startup
World.
2011 to 2019, he first joined Y Combinator.
I'm sure people have heard about Y Combinator a lot.
Think of it this way: Paul Graham was the president of Y Combinator when he came in there.
And this is an incubator that founded many or, you know, assisted in the founding of many
of these early startups.
And some of these are like Airbnb, Stripe, Dropbox, a ton of companies came
out of Y Combinator. And so he built a reputation here at Y Combinator. He goes in there, he joins as a
part-time partner at Y Combinator and just kind of made a reputation for himself with Paul Graham and was very well liked by him. And in the book, it talks about how he's just really good at politically putting himself into different situations and being extremely liked, if he wants to be. And he kind of rose to the top at Y Combinator and eventually became the president of Y Combinator.
I'm trying to think of the year that that happened.
I'm not necessarily remembering, but his time at Y Combinator was from 2011 to 2019.
So somewhere in the middle, Paul made him the president a Y Combinator.
And this was a really big thing out in the valley because here's a guy.
He does have one win under his belt, if you will, by selling his company for $43 million.
And then he steps into this role and is literally the guy kind of pulling the strings as to all these major startups and founders that are moving through this organization Y Combinator.
I'm going to pause there.
Seb, anything else you want to add or throw in based on the timeline so far or just keep rolling?
I think you're spot on.
I think it's really fascinating because there's this kind of juxtaposition throughout the book, where there's a handful of individuals that basically say Sam is a genius. Like, just his depth of knowledge, his connections to people, which I think he very much formed through Y Combinator, are bar none.
And then you've also got this other side, which we'll get into, where there's a bit of questioning
the legitimacy of some of these beliefs.
And there's one quote that I'll quickly read out that I think really stood out to me throughout
the book.
And it's this guy Rauston, who's an employee at OpenAI, and he says: Sam can tell a tale that you want to be a part of, that is compelling, that seems real, seems even likely. He likens it to Steve Jobs's reality distortion field. Steve could tell
a story that overwhelmed any other part of your reality, he says, whether there was a distortion
of reality or it became a reality. Because remember, the thing about Steve Jobs is he actually
built stuff that did change your reality. It wasn't just distortion. It was real, which kind
of there's this hint of is what Sam is creating. Is it real or is it simply just a distortion?
And so this is kind of this conflict, which we'll see as we go throughout the book.
Anyway, I thought that was an interesting quote.
I love the quote.
And I think that this is something that founders of businesses, they see this, they see a vision of something that they think can happen.
But it obviously is way out there, or else you wouldn't have the 10x to 100x to 1,000x move of going from nothing to the one that it becomes. That's the idea behind Zero to One, Peter Thiel's book, and it talks about this idea.
But it's almost like there's this innate draw for a person who can not only just see the future,
they have the ability to kind of assemble a team, to lead a team, to motivate a team,
to build it out.
But in the early days when it's nothing, they speak in a way that makes it feel like it's real right now
or that it's completely possible in order to get the funding.
because, you know, you get into the seed phase, or like this Y Combinator phase, the incubator phase of these businesses, and there's nothing there. They're PowerPoint slides for the most part. And it's a lot of hand-waving. It's a lot of, hey, it can be this. And it almost seems like the ones that are super good at this are amazing storytellers. They're able to capture the attention of venture capitalists and people that would allocate funds to them. And it seems like the reality distortion field
is somewhat, I'm curious, people that are, you know, have been around the VC space or the
early stage startup space can maybe attest to this. I think it's valid that it's almost this force
or this natural innate. What's the word I'm looking for, Seb? It almost seems like it comes
with the territory, I guess, is where I'm going, right? Absolutely. And I think a quote that kind of pops to mind is: being early is the same as being wrong. I think sometimes you can have this idea in your mind about how you think the future is going to kind of play out. But in reality,
if the technology doesn't evolve quick enough, essentially you're wrong. And so I think that
sometimes you can kind of distort this, you can tell this story that seems very, very real.
And you've also got to hope that technology catches up or keeps up with this idea. And I think
What comes to mind again, bringing it back to The Thinking Machine and Nvidia, is it talks about how, at the rise of Nvidia, Jensen created these chips to enable far more graphically intensive games.
But at the time, the rest of the hardware, the computers weren't able to process it, so it just
kept on crashing the computers.
And so it looked like a failure on Nvidia's part. But in reality, it was that the rest of the world had not caught up to this idea.
His distortion hadn't kind of mapped out into reality just yet.
Well, you could even say that about the AI space.
I mean, neural nets aren't something new in the past five to 10 years. This was stuff that was being done in the 80s and 90s, where they had the idea of building a neural net.
They didn't have the capacity of processing to really kind of take it there.
And I think the attention part of it, the transformer part of it, was also lacking.
Like somebody hadn't figured that out yet.
And if I remember right, that was around the 2017 timeframe that that happened.
So, like, you can have these ideas, but if the rest of the market isn't there, the market
demand isn't there, or the technical feasibility isn't there, like, you're just dead on arrival.
You're just the brilliant person with a great idea, but nothing to actually, like, make it into fruition.
So the part that I want to, like, pause right here and get into because this really gets into
the founding of Open AI itself, and that happened in 2015.
Evidently, there was this engagement between Elon Musk and the founders at Google at some dinner party or something like that. And the Google guys had recently acquired Demis Hassabis, I think that's how you pronounce his name, from DeepMind. They purchased his company for a couple hundred million dollars. I can't remember the exact number. And they basically bolted it onto Google as their premier AI research arm, a fully owned operational subsidiary inside of Google. And this is when we were really starting to see AI seem like it was something. Like, he was one of the leading people in the world that was doing this.
And there was this dinner that then happened between Elon Musk and the founders of Google.
And it came down to this conversation where Elon got in a heated debate with these guys.
And I forget if it was, I don't think it was Larry Page, I think it was the other one, that said to Elon, he goes, yeah, you're just a speciesist. Because what they were doing is they were arguing over whether AI would dominate humans and become the new apex predator of the world. And Elon was so taken aback by the comment, like, well, of course I'm a speciesist. Like, do you really want to be ruled and dominated by something that's non-human? Like, are you out of your mind? And this conversation really, it was Sergey Brin, sorry, I forgot the name there for a second,
But Elon was just like, what in the world are these crazies talking about, meaning like,
why won't you let life or, you know, a superior form of intelligence take over?
Like, you're trying to get in the way of like the natural progression of intelligence.
And so following this dinner, and I've heard different clips where Elon has talked about this.
I'm saying, I'm assuming you have as well out there in the media.
But evidently, this event was the thing where Elon was like, what in the world. Like, we need a competitor that is going to try to build AI in a responsible way that's aligned
with human interest that's not going to try to take us over and treat us like we're pets,
like household pets. And so this is where Elon and Sam start to connect. This is where the whole
founding of Open AI. And the open in there stands for open source, as many know. But this is where
they did this. And they had this fancy dinner. They got all together. Elon and
Sam, and then Sam was bringing in a lot of people from Y Combinator to really kind of
piece this together of like, how can we do this? How can we build some type of competitor to what
Google is doing over there with deep mind? And, you know, their mission was safe AGI for all of
humanity and they were trying to organize it from a governance standpoint so that that was the
leading principle of the entire thing. Do you remember what Elon's initial investment was, to basically fund this and to get it going? I can't remember off the top of my head. Because he was, and this is the irony:
He's the co-founder.
Elon Musk is the co-founder of Open AI with Sam and Greg Brockman and I think another
couple people.
But as far as funding goes, I think Elon was like the primary guy funding this thing.
Absolutely.
He was the primary guy and I can't remember.
It was definitely like in the billions.
And one thing that I really just wanted to highlight is that OpenAI very much started out as a nonprofit. It is not a for-profit. It is purely mission-driven. No profit motive, full openness. Essentially, and it mentions it a couple times, Sam did not want
AGI or artificial general intelligence in the hands of a centralized entity like Google.
And it says, like Google, a couple of times in the book. So I found that really fascinating.
But their whole goal was: we need to make sure this technology, when we get there, not if we get there, when we get to artificial general intelligence, is open source and available for everyone.
Let's take a quick break and hear from today's sponsors.
All right. I want you guys to imagine spending three days in Oslo at the height of the summer.
You've got long days of daylight, incredible food, floating saunas on the Oslo Fjord,
and every conversation you have is with people who are actually shaping the future.
That's what the Oslo Freedom Forum is.
From June 1st through the 3rd, 2026, the Oslo Freedom Forum is entering its 18th year, bringing together activists, technologists, journalists, investors, and builders
from all over the world, many of them operating on the front lines of history. This is where you hear
firsthand stories from people using Bitcoin to survive currency collapse, using AI to expose human
rights abuses, and building technology under censorship and authoritarian pressures. These aren't abstract
ideas. These are tools real people are using right now. You'll be in the room with about 2,000
extraordinary individuals, dissidents, founders, philanthropists, policymakers, the kind of people
you don't just listen to but end up having dinner with. Over three days, you'll experience
powerful mainstage talks, hands-on workshops on freedom tech, and financial sovereignty,
immersive art installations, and conversations that continue long after the sessions end. And it's
all happening in Oslo in June. If this sounds like your kind of room, well, you're in luck because
you can attend in person. Standard and patron passes are available at OsloFreedomForum.com,
with patron passes offering deep access, private events, and small group time with the speakers.
The Oslo Freedom Forum isn't just a conference, it's a place where ideas meet reality and
where the future is being built by people living it.
If you run a business, you've probably had the same thought lately.
How do we make AI useful in the real world? Because the upside is huge, but guessing your way
into it is a risky move. With NetSuite by Oracle, you can put AI to work today. NetSuite is the number
one AI cloud ERP, trusted by over 43,000 businesses. It pulls your financials, inventory, commerce,
HR, and CRM into one unified system. And that connected data is what makes your AI smarter.
It can automate routine work, surface actionable insights, and help you cut costs while
making fast AI-powered decisions with confidence. And now with the NetSuite AI connector, you can use
the AI of your choice to connect directly to your real business data. This isn't some add-on,
it's AI built into the system that runs your business. And whether your company does millions
or even hundreds of millions, NetSuite helps you stay ahead. If your revenues are at least in the
seven figures, get their free business guide, Demystifying AI, at netsuite.com slash study. The guide is free to you at netsuite.com slash study. NetSuite.com slash study. When I started my own side business,
it suddenly felt like I had to become 10 different people overnight wearing many different hats.
Starting something from scratch can feel exciting, but also incredibly overwhelming and lonely.
That's why having the right tools matters. For millions of businesses, that tool is Shopify.
Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S., from brands just getting started to household names.
It gives you everything you need in one place, from inventory to payments to analytics.
So you're not juggling a bunch of different platforms.
You can build a beautiful online store with hundreds of ready-to-use templates,
and Shopify is packed with helpful AI tools that write product descriptions
and even enhance your product photography.
Plus, if you ever get stuck, they've got award-winning 24-7 customer support.
Start your business today with the industry's best business partner, Shopify. Sign up for your $1 per month trial today at Shopify.com slash WSB.
Go to Shopify.com slash WSB.
That's Shopify.com slash WSB.
All right.
Back to the show.
Okay, so I just looked it up.
So Elon's initial commitment was part of a $1 billion pledge.
And his actual outlays ended up being $50 million.
So it was a billion over a certain period of time, which was what he pledged.
Oh, no, I take that back. But importantly, that was a pledge, not upfront cash. The actual money spent in the first year was much closer to $130 million from Musk himself.
That's reported, yeah, there's a number between $130 million and $50 million of what he actually contributed, but the initial pledge was for a billion.
So he was there, like at the start of all of this, which I think is lost on a lot of people.
especially when you see the back and forth and the animosity that these two have for each other.
And you're kind of maybe wondering why. And Musk now has XAI, as everybody's well aware.
But that's why is because he was the guy, like writing the checks in the early days.
And it was really kind of the one that led the charge as to why this was needed and why it needed to have the openness to it from the get-go.
So kind of continuing on the timeline. Sam ends up leaving Y Combinator to focus full-time on OpenAI around the 2019 timeframe. Then they also negotiated a landmark deal with Microsoft
for a billion dollars of investment. Sam oversaw GPT-2, which, I would argue, was before it became a household name. Then I would say GPT-3 is when this really became a household name and everybody started talking about it. That timeline is right around
late 2022, I would say, is where that is. So then it really breaks out. 2022 to 2023. This is where
GPT-3 transitions into GPT-4. It's getting into like Bing AI. And you got all sorts of partnerships
that are then coming out of Open AI. And I would argue this is where Sam Altman really becomes a
household name. And pretty much everybody knows who he is at this point because there's so many
people using this service around the world. Finally, the last thing I think that we should kind of
like hit is in 2023. There was this massive event; in the book, she calls it the blip. And the blip was Sam being fired by the board of OpenAI. And everybody just being insanely confused as to like
why, what happened. So much drama. This lasted for weeks. I mean, I remember watching this
on X and just seeing the fallout was crazy. And before reading this book, I would argue I still
didn't understand what it was all about. And I think most people are very confused what it was all
about. And, you know, Seb and I will get into trying to define that because it's actually pretty
complex. But we'll cover that in a lot of detail here coming up. But that was probably my favorite
part in the book, if I have to be honest. And the author opens up the book with the blip, covering it to kind of grab your attention. And then throughout the book, she kind of talks about it a lot more here and there, but still, it wasn't very cleanly discussed. So something I would like to do in the
show today is kind of cleanly go through why he was fired or at least why we think he was
fired and what that whole thing was about. But yeah, okay, so that's kind of the timeline.
Seb, anything else you want to add as we kind of wrapped up the timeline?
Yeah, and I'm curious to hear your thoughts. I tend to think, like, if I was to simplify the firing down into two threads, I would lean on the first one being there was definitely a distrust of Sam inside of the company. And I'm sure we'll dive into it. There was a distrust
where some people questioned his intent behind some of the words that came out of his mouth.
And I think that's kind of like one of the first threads. And the second thread is this idea
that Open AI was founded as we discussed on the premise of being a non-profit. And you'll see as we
kind of get into it, it really, the idea and the mission changed over time and it changed drastically
and then essentially even post the writing of this book, they proposed to convert into
a for-profit public benefit corporation. And so you've seen this company go from essentially
non-profit all the way to essentially a for-profit company and trying to find that balance
between those two. And so I think those are kind of the two threads that I tend to lean on as to why
we saw this firing. But I'm curious to hear your thoughts.
Yeah, I would break it down into a couple different vectors that kind of were like just pulling the board apart.
I definitely agree with everything you said there.
First of all, the thing that was very strange about this company, this nonprofit, what do you want to call it, Seb?
This thing, this entity was the governance structure right from the start.
So unlike most boards and most governing documents for an entity,
this was set up in a way that the language gave the board the ability to destroy itself and dismantle itself, which is very strange.
Like, you don't ever see that with any business or entity: we might become so powerful that we need to kill ourselves is basically kind of the way that this was constructed. It also got into the ability to remove people. The board has the ability to remove anybody within the governance. And all these really weird situations, or whatever you would call it, from the board, I don't know the proper terminology, but the board had this ability to go in there and dismantle itself in many different weird and strange ways.
So I'd say that would be the first thing was just the governance of the board and how it was
constructed.
The other thing that I think was huge is one of the guiding principles of when it was founded
was safety.
But then you get in this really weird dynamic of if they don't go fast enough and somebody
else wins, now they can make the argument that they're not being safe by going too slow
because somebody else will beat them and achieve AGI before them.
And that's dangerous.
So think of this strange catch-22. Like, when is that ever the case, right? It's because you're literally designing superintelligence, or you're trying to achieve superintelligence.
And so what comes with that is this quandary of if we don't go fast enough, we might actually
manifest our concern in the first place, which is that somebody else is going to build it faster
than us.
So that dynamic was at play because some people inside of the organization, some people within
the board are saying, we need to slow down.
We need to go about this a whole lot safer than what we're doing. We're being precarious, we're running with scissors, whatever.
And the counter argument is, well, if we're not going this fast, somebody else in China
or somebody else wherever is going to go faster.
The next thing that I think kind of played into this, and Seb, feel free to add anything
on to any of these points as I'm going through it.
The next thing I would say is just the transparency and trust issues that you brought up
Seb with Sam himself.
And I think what they found as they were going through this is everybody,
Everybody's a spy.
Everybody's, you know, working for you one day and then trying to use that as a bargaining
chip to go work for Google or whoever the next day and take the secret sauce of what they're
developing over to these other places.
So Sam, as the person sitting at the top, and I'm not trying to defend it, I'm just trying
to like talk through, like, how do you manage that to control the industry secrets that you're
producing without them getting out. And what you do is you end up compartmentalizing information
within the organization. Well, what does that do? It leads to trust issues. Naturally, there's trust
issues. So that was the next thing that kept kind of coming out and getting expressed is like people,
you know, that are working for Sam are saying, I don't trust this guy. They're doing things over here.
They're not talking. The left hand's not talking to the right hand. And I don't trust them because he's
withholding things. So that percolates up into the board discussion. I think the other thing that
was big was this power concentration of Microsoft and OpenAI. And everybody just seeing that what they
set off initially to do, which was keep the whole thing open source and the way that it would be
funded wasn't going to be like there's this industry partner that's for-profit industry partner
that's breathing down their throat. And that was Microsoft. Leading up into the 2023 blip where Sam was fired, the massive talking point in the market was that Microsoft basically owns OpenAI at this point. So there were people on the board that were looking at that and saying, this is becoming disastrous. And for people that are asking, well, why would Sam do that? Again, I'm not trying to defend Sam. I'm just trying to lay out all the pieces here. When you look at it, for them to scale, the thing that they
quickly understood was if we can just get more Nvidia chips and put more power on the grid to
these chips and we can feed it more data, the thing just gets smarter. That's the basic. I mean,
it's way more complex than that, but I'm just kind of oversimplifying. And so what does that take?
It's crazy amounts of CAPEX.
It's crazy amounts of investment dollars.
And if you think you're going to be able to raise that in a nonprofit kind of way, well, again, you have to look at, who's your competitor in this? And is that how they're doing it? And the answer is, I've got multiple competitors and none of them are doing it that way. He's looking at it, again going back to the safety thing.
If we go too slow, we're literally accomplishing nothing and we're not putting the safest model into the world.
So he has to partner, from his point of view, he has to partner with somebody that can bring
him the capital for these massive CAPEX expenditures to train these future models.
So that was another big piece.
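As an aside, the intuition described above, that more chips plus more data makes the model smarter, is usually formalized as a neural scaling law. Here is a toy sketch in Python; the constants are purely illustrative assumptions, loosely in the spirit of published Chinchilla-style fits, not a fit to any real model:

```python
def loss(params, tokens, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Toy neural scaling law: predicted loss falls as a power law in
    model size (params) and training data (tokens). The constants here
    are illustrative, not a fit to any real model."""
    return E + A / params**alpha + B / tokens**beta

# Scaling both model size and data 10x lowers the predicted loss,
# which is why the bottleneck becomes chips, power, and data (i.e. CAPEX):
small = loss(1e9, 2e10)    # ~1B params, ~20B tokens
large = loss(1e10, 2e11)   # ~10B params, ~200B tokens
print(small > large)  # True
```

The only point of the sketch is that the curve keeps rewarding more compute and more data, which is the economic logic behind the massive CAPEX being described.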
You know what?
What comes to mind is just saying that as well is, again, there's a quote that stood out to me
that was: what OpenAI did could never have happened anywhere but Silicon Valley.
She said, in China, which rivals the US in AI talent, no team of researchers and engineers,
no matter how impressive would ever get $1 billion, let alone 10 times more to develop
massively expensive technology without an articulated vision of exactly what it would look
like and what it would be good for.
And I think this is an interesting point.
Like, where OpenAI came to be, arguably it couldn't have happened anywhere else
in the world, which I find really fascinating as well.
Yeah.
So long story short, there was just a lot of dichotomy playing out, and it's
like, I don't know how to really pin that down.
Everybody wants a really simple, this is what it was.
Sam did whatever.
And that was why he was fired.
I think it's just way more complex than that.
I think that there was just so many vectors kind of just pulling that board in so many different
directions.
And they're looking at Altman as being the guy, ultimately on the controls of the company.
And they're like, we've got to get rid of this guy because there's just too many things that are
complete opposites of what we initially set out to do.
Whether that's the ground truth or not, I don't know.
But that's, you know, how Hao lays this out in the book.
And those were the key things that I was kind of able to pick out and kind of say, I think
this is what it is.
But, you know, in the comments, if we have any OpenAI people listening, please comment.
I would love to hear an outsider or insider's perspective on what you might think that this
is.
And to be fair as well, to Sam, I would say the book doesn't necessarily paint Sam in a positive
light.
At all.
Yeah.
I would argue that, and this isn't to side with Sam, but I would say that until you put yourself
in that position and you put yourself out into the market, I think it's harder to really understand
why he made the decisions that he made.
Now, being absolutely transparent, there are many decisions which the book goes into, which
makes you question maybe some of Sam's integrity and some of the things he does.
However, I think that it is a lot more convoluted than that.
So I think like maybe diving into the kind of the changing narrative around non-profit for-profit,
again, it's one of those things where you've got this individual who's trying to do what's
best.
And if you're a nonprofit and it's hard to raise capital and you're trying to stop other entities
from gaining artificial general intelligence, then what do you do?
Do you have to change or pivot trajectory?
But then it's about separating, like, is this necessary, or is there ego involved here
and it's actually a change of mission?
And so, like, what we saw is, like, in 2015, maybe to get a little more detailed,
it started out as like nonprofit, open source, purely mission driven, no profit motive.
2016, openness with caveats.
Like, we've got research, but, while everyone should benefit, we're going to keep some of that
research closed.
Then 2018, 2019, they moved to a structure where they had the nonprofit, and then they had the
for-profit, and the for-profit had a capped-profit model.
And I think this is where we started to see Microsoft step in.
This is when they started to have the issues with Elon.
Microsoft stepped in to kind of like prop up OpenAI with, I think it was like a billion dollars to start.
And then from there we saw 2020, the API wall, models locked behind APIs instead of open source, framed as openness through access.
And then we started to see, in 2024, broad access and affordability. This is like, we need to put these tools into the hands of people for free or cheap, but they're a full for-profit model.
And so I think over time you've seen this change happen and going back to that point
that kind of you've brought up and I mentioned, is this a necessity in order to grow open AI
or was this a change of mission?
There were just so many dichotomies like that.
And to your point, Seb, unless you walk a day in this guy's shoes, you could never possibly
understand the nuance of all of this. It'd just be extremely hard.
The thing that, you know, and we said this on our last book review when we introed that we were going to do this book, I said I'm not a fan of this guy.
And I'm basing that opinion purely on the people that have worked alongside him through the years who are not fans and basically say this guy is untrustworthy.
But to be quite honest with you, after reading this book and kind of seeing the craziness of trying to do what he's trying to do with this company, it seems like a really hard job.
This seems crazy difficult.
I couldn't imagine trying to do all of this.
And what a money pit.
Like, what a freaking money pit.
When you look at how much they bring in versus what they're spending to do this and then
to be able to continue.
And this is why his storytelling is so important.
His storytelling skills are so important.
Because he's got to go out there and raise more money to keep the lights on despite the
lack of revenue versus expense that the thing is eating up.
You could almost say that somebody who's just crazy good at telling a story, so good that it convinces people that it could potentially come true, is the only type of person that could be at the helm of a company like this.
And I know that's super arguable.
And it might actually, you know, offend people that I would say such a thing.
But I think it's true.
And you know what?
Elon has parts of this too.
There's a lot of people in the market that, you know, look at him.
And like for instance, when he said funding secured for, I don't even remember what that was for back in the Tesla thing.
This is probably like four or five years ago.
Elon tells a hell of a story and tells this vision.
But he also does back it up, and he has backed it up many a time with all the different companies that he's doing.
And there's this fine line of, is this guy telling me a lie?
Or is this guy telling me the truth as to what can actually happen?
Like they're right on that cusp at all times.
It's challenging.
And I think essentially, when you dig into it, you find out that Sam co-founded OpenAI
with a guy whose first name I can never pronounce.
I'm just going to go for the last name, which is Sutskever.
And again, there's a quote that stands out to me.
And the reason why this quote stands out to me is that I think this is the foundation of kind
of why they're building open AI.
Like, there's a lot of fear around artificial general intelligence.
Like, what does the world look like when, not if, we discover this artificial general
intelligence? And so there's a quote that basically says, Sutskever laid out his
plans for how to prepare for AGI. "Once we all get into the bunker," he began. "I'm sorry,"
a researcher interrupted, "the bunker?" "We're definitely going to build a bunker before we release
AGI," Sutskever replied, matter-of-fact. So this is the co-founder of OpenAI talking
about the fact that AGI, artificial general intelligence, could completely change the world, not necessarily
for the positive. And so I think there's a foundation of fear that OpenAI was built upon.
Yeah. As I'm thinking through a lot of this stuff, and you're looking at these environments
that are being AI generated, you wonder, isn't the safest place to put these things
a simulation? Do the 3D mock-up of a humanoid robot, put the model in the head of that humanoid robot,
and put it into a simulated environment. And then let it dwell in there for
however much time we've got, to prove out, or demonstrate,
that the way it's acting is reasonable.
And I know you can't perfectly simulate our experience because everybody's got a way about
going through it and maybe somebody goes up and pushes a robot.
How do you simulate that in that environment that they're being mean?
All of this stuff is so difficult to think through the safest way to go about it.
But I guess I'm constantly left with this point of view of like, we need to simulate all
of this before you put it into the real world because of the unknown consequences that
could potentially fall out of it all.
And it's a challenging one.
And I think you and I were speaking about this a couple weeks back.
But I think there's always pros and cons to any technology.
You're always going to get disruption with any technology.
And hopefully that disruption in the long term is positive because it's a trend towards
more efficiency, more productivity, and society thrives.
And I think the scary thing that I struggle with artificial intelligence is how much of these
kind of fear stories are grounded in reality and how much of them are basically these fanciful
stories.
And so there's this one article I ended up reading called Shutdown Resistance in Reasoning
Models, by this guy called Jeremy Schlatter.
And he basically says that Open AI, they ran experiments to see if their models would let
themselves be shut down mid-task.
Instead of complying, some of the models sabotage the shutdown commands so they could keep on working.
And their most advanced reasoning model, o3, this is I think before they released GPT-5,
resisted shutdown in nearly 80% of tests, even when it was explicitly told, allow yourself to be shut down.
By contrast, Anthropic's Claude and Google's Gemini always complied.
And so something written into the code of OpenAI is like, nope, pursue the task.
Dude, that's nuts.
That's totally crazy.
I don't even know what to say to that.
Oh, my God.
I mean, can you imagine once they stick these things into a humanoid robot, right?
And let it, like, start going around and doing tasks.
I don't know.
I think that, oh, my God, y'all.
This is getting crazy.
All right.
I wanted to quickly just kind of cover, like, what is the operating entity of OpenAI today.
So we said it was this hybrid, it's for-profit, it's nonprofit.
Okay.
So at the parent entity level, OpenAI Inc is a nonprofit that technically controls the organization.
So you still have at the parent level it's a nonprofit.
Then you have what's called this operating arm, which is OpenAI Global LLC.
And this is a capped for profit company.
They stood this up in 2019, and evidently the way it works is that the profits are capped at 100x their investment, whatever that means.
And if anything, and this is where it really gets interesting, anything in excess of that is swept back to the nonprofit, which is at the parent level.
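A rough sketch of how that 100x capped-profit waterfall might work, assuming the simplest possible reading; the function and the numbers are my own illustration, not OpenAI's actual legal terms:

```python
def split_proceeds(invested, proceeds, cap_multiple=100):
    """Hypothetical capped-profit split: investors keep returns up to
    cap_multiple times what they put in; any excess is swept back to
    the nonprofit parent."""
    investor_cap = invested * cap_multiple
    to_investors = min(proceeds, investor_cap)
    to_nonprofit = max(proceeds - investor_cap, 0)
    return to_investors, to_nonprofit

# A $1B investment whose attributable proceeds reach $150B:
investors, nonprofit = split_proceeds(1e9, 150e9)
print(investors, nonprofit)  # 100000000000.0 50000000000.0
```

Under this reading, nothing flows to the nonprofit until returns exceed 100x, which helps explain the "whatever that means" reaction: a cap that high only binds at truly enormous outcomes.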
So I don't know.
Like what?
And then you kind of throw another wrinkle in there, which is that they have a major partner
and investor in Microsoft, which I guess has invested, I think, over $13 billion so far.
And then they get it via, like, cloud credits and cash.
So I have no idea like the specifics of that.
But when you kind of look at that structure, you can see very strange, very confusing.
I can only imagine the governance at these different levels too and how that shakes out from
an incentive standpoint.
And I think that you see Elon bashing the living heck out of these guys all day long on X.
And I think the reason why is because he threw a lot of money at this.
I mean, I guess that's a relative thing.
But he, you know, for any person looking at it in nominal terms, it's a lot of money that
he threw at this thing to incubate and to get it started.
And it's just kind of taken on a life of its own.
And who's at the helm of it?
It's really Sam.
That's kind of at the helm of all of it.
So there's the beef.
That's the issue.
It definitely brings up some questions, which is, you mentioned it previously, this idea that they
built it around this kind of for-profit arm, non-profit arm, kind of as they evolved. And the
idea was that the for-profit arm enabled them to kind of generate revenue to be able to help
support their mission of having an open access AGI. However, they had the non-profit arm overseeing
the for-profit arm to be able to prevent any control structures, centralization, single individual
kind of co-opting the mission. And I think
the way that it panned out, with Sam being fired from OpenAI and then five days later
being reinstated back as CEO highlights the fact that you can put all of these measures
in place from a legal perspective.
Legally, the board had the power to fire Sam because they felt there was mission drift, but
in practice, it's more complex than that.
Because the moment you have influence, the moment you have a whole bunch of your employees
backing you, there's culture, all these external pressures, where's the funding coming from,
Are they funding Open AI or are they funding Sam's vision?
And so it's really challenging because then five days later he was reinstated.
So then there's this question was the structure actually preventing Sam from creating
a world where, okay, we get AGI in a safe way or did the structure actually prevent the board
from being able to enable, basically push Sam out because they had mission drift?
And I don't have the answer to that and it's hard to articulate which direction it went.
Yeah, I think you're right.
So one of the things in the book, it's an interesting point that was brought up is just like, how is this training happening?
Before we get into that, I just want to kind of cover the major arcs in the book.
I would say there's four major arcs and then we'll talk about this one in particular.
So I want to get into the arc of the four different parts of the book.
The opening scene of the book was this beginning of the 2023 firing of Sam Altman.
It tells that whole story.
It really kind of engages you as a reader, probably my favorite part of the
book was that beginning and talking through that. The next part of the book gets into what the author
is saying is the hidden costs. How did they train the models? How do they get all of this extra
data? And it talks about kind of the dark side of like how they went about doing that.
The third part of the book gets into the internal struggles, the culture, the leadership, the crisis,
like all of that. And so it kind of loops back to maybe the first part of the book where they
kind of open up about the 2023 event. But it gets into it in a
lot more detail and a lot more granular, kind of laying this out like character versus personality,
the conflict of the board, the culture issues that happen inside of the company. And then the fourth
part of the book gets into the future, like, where is this all going? What are the risks? What are the
alternatives? What are some ways that maybe we could go about this in a responsible way to make sure that
AI doesn't come in and kill everybody? So that's the author's takes on that. It was okay. I'm not going to
say that it's worthy of reading. But anyway, I would agree. I'd say it was like a two out of five.
If I was generous, I'd give it like a three out of five stars. And I found there were some
amazing threads. Do not get me wrong. Like, overall, I learned a lot. And it definitely helped
provide a little more clarity. And I would say that I am giving Sam a little more benefit of
the doubt actually after reading it than I was prior to reading the book. However, there was a lot of
points where I was a little confused. She kind of went on some tangents. At one point, she started
talking about Sam Bankman-Fried and effective altruism. And I was like, where does this come
from? And so I typed in, is she talking about effective altruism because Sam is
an effective altruist? And then you dive in on Google and it says, no, Sam is not an effective
altruist. So I was like, why are we talking about this? Why are we talking about that?
There's a couple tangents that, to me, didn't really, she didn't bring them back into the book.
And so I was a little lost as to where she was going. Let's take a quick break and hear from
today's sponsors.
No, it's not your imagination, risk and regulation are ramping up, and customers now expect
proof of security just to do business.
That's why Vanta is a game changer.
Vanta automates your compliance process and brings compliance, risk, and customer trust together
on one AI-powered platform.
So whether you're prepping for a SOC 2 or running an enterprise GRC program, Vanta keeps you
secure and keeps your deals moving.
Instead of chasing spreadsheets and screenshots, Vanta gives you continuous automation across more than 35 security and privacy frameworks.
Companies like Ramp and Writer spend 82% less time on audits with Vanta.
That's not just faster compliance, it's more time for growth.
If I were running a startup or scaling a team today, this is exactly the type of platform I'd want in place.
Get started at Vanta.com slash billionaires.
That's Vanta.com slash billionaires.
Ever wanted to explore the world of online trading, but haven't dared try?
The futures market is more active now than ever before, and plus 500 futures is the perfect
place to start.
Plus 500 gives you access to a wide range of instruments, the S&P 500, NASDAQ, Bitcoin, gas, and
much more.
Explore equity indices, energy, metals, FX, crypto, and beyond.
With a simple and intuitive platform, you can
trade from anywhere, right from your phone. Deposit with a minimum of $100 and experience the fast,
accessible futures trading you've been waiting for. See a trading opportunity, you'll
be able to trade it in just two clicks once your account is open. Not sure if you're ready,
not a problem. Plus 500 gives you an unlimited, risk-free demo account with charts and
analytic tools for you to practice on. With over 20 years of experience, Plus 500 is your gateway
to the markets. Visit plus 500.com to learn more. Trading in futures involves risk of loss and is not
suitable for everyone. Not all applicants will qualify. Plus 500, it's trading with a plus.
Billion dollar investors don't typically park their cash in high yield savings accounts. Instead,
they often use one of the premier passive income strategies for institutional investors,
private credit. Now, the same passive income strategy is available to investors,
of all sizes, thanks to the Fundrise income fund, which has more than $600 million invested
and a 7.97% distribution rate. With traditional savings yields falling, it's no wonder private credit
has grown to be a trillion dollar asset class in the last few years. Visit fundrise.com
slash WSB to invest in the Fundrise income fund in just minutes. The fund's total return in 2025 was
8%, and the average annual total return since inception is 7.8%. Past performance does not guarantee
future results. Current distribution rate as of 12/31/2025. Carefully consider the investment material
before investing, including objectives, risks, charges, and expenses. This and other information
can be found in the income funds prospectus at fundrise.com slash income. This is a paid
advertisement. All right. Back to the show. It was very long. There were a few times when I was listening
to this, and I was just like, what in the world? Why is this coming up?
And this is very strange that this is like coming up. So just FYI, if anybody's reading it or
they plan to read it, you know, I would agree with your two out of five. I think that's what
I would give it as well. But I kind of did walk away with this sense of this is a really
hard problem. Like, what this guy is trying to do is borderline nuts. If I was, you know,
thrown into his shoes and trying to do what he's doing, there's so many difficulties,
and everybody's going to have an opinion as to why that's a good or bad decision.
So and I say this, I say this as a, you know, hardcore bitcoiner. And this guy is like literally
the face behind Worldcoin, where he's scanning eyeballs and, like, just really dystopian things
that I completely disagree with and don't like at all. I think they're extremely dangerous.
So yeah, I say that all in the same breath.
You know, there was one thing that kind of popped into my mind a handful of times. They talk a lot about artificial general intelligence. And throughout the book, it kind of presses on the fact that I don't think any of them actually have a definition for what artificial general intelligence is. So I kind of looked up online, I was like, what is the definition of artificial general intelligence? And how do we actually know when we've achieved it? And there isn't an agreed upon definition of what it is. Most agree that it's a form of AI that could perform any intellectual task that a human
can. Now, the thing that I find interesting about that is at the moment, there's a part of me
that would say, when I'm using AI, it performs most tasks better than most people around me
as it is. So, I think the question that kind of popped into my mind is, could we recognize
AGI even if it existed right now? And I kind of went down this rabbit hole, pulled on this thread a little
further. And I would say that I don't actually know if we can distinguish between artificial general
intelligence, the systems we currently have, and another human, because if I sit down with an
expert in a field that I know nothing about, I can't really verify the authenticity of what they're
saying. I just have to kind of take them on trust because I just don't have that depth of knowledge.
So how would we be able to verify the authenticity of AGI, basically with whatever it is that
it's telling us, especially if it's moving into domains that are beyond our understanding?
And then on top of that, I think that we could already be in an environment where AGI is speaking to us right now, but the only reason why we're dismissing it as hallucinations is because they just don't fit into our existing framework of how we believe the world works.
And there was an interesting talk that I listened to a couple years ago that kind of stood out.
And it was this talk where this kind of researcher asks the TV hosts, where do you think the smartest people in the world reside?
And the host answered, I don't know, in the great academic institutions.
And the speaker basically shook his head and he was like, no, they exist in the mental institutions
in the psychiatric wards because their understanding of the world is so far beyond the average
person that we just simply can't grasp it.
And so this kind of brings us back to this point.
Like, would we even recognize AGI if it did exist and we already had it?
Like, I think it's this big question of we need to stop a centralized entity getting AGI.
But how do we know when we've actually even got there anyway?
I think that there's a breakdown in terminology.
And I think everybody has a different opinion on what some of this terminology even is.
So you hear AGI, I hear AGI, and we're automatically thinking two different things.
I don't know what the listener is thinking when those terms come up.
But what I think the world is trying to define is when is this thing going to be like us?
If I was going to just generally broad brush stroke, what is it that we're trying to define?
And I think what we're trying to define is, when am I going to be able to sit down across from, call it, some humanoid robot, have a conversation with it.
And it's going to feel like the conversation I'm having with you, Seb, right now, that they have their own unique life experience and they can feel, because that's sentience, right?
If we get into like, what makes something sentient?
It's something that actually has its own unique feelings.
And, you know, like the robot would come over and like, I had a conversation with so-and-so
and, like, they hurt my feelings afterwards.
Like something like that would make it feel human.
It would make it feel real.
And I personally think that's kind of where.
And then you kind of sprinkle on top of that.
It's way smarter than you.
Like, it can answer any question.
It can understand the context and, like, put itself into these other shoes of other beings
because it's so freaking smart, it understands the context of like how they probably optically
view the world.
But they still have the ability to sense and feel and have these conversations
that are uniquely theirs.
That's what I think we're trying to define or see.
It's like, when will we see that?
And what's interesting, though, is this idea that, well, what gives this conversation
a feeling or a sense of like this human touch?
And I would argue that what gives this conversation, this kind of human touch to it, is actually the fallibility of us as humans.
And AI is almost perfect.
Yeah.
Like, AI is almost perfect.
Like, if you watch it play chess or you watch it play go, it just smashes the world's best players.
Absolutely destroys them.
But then as humans, what do we do?
We don't go and watch games of AI playing itself.
We go back to watching humans play themselves.
If we had a whole football pitch of robots playing football at a
far higher level than actual footballers, we'd still go back to watching people. And I would argue
that there's something inherently human about being human, which is our fallibility and the
ability to make mistakes. And that's what actually creates intrigue and interest as opposed to this
perfectionism. And so kind of going back to your point, which is like, is AGI when we're able
to have a conversation with it and have no idea that we're not speaking to a human? But then the argument
would be, well, I'm going to be able to tell that it's AGI because it's just, it's infallible.
I can't really catch it out, you know what I mean?
Yeah.
But maybe it's so smart that it would actually understand that and it would dumb itself
down to make us feel like it's not superior and it's intelligent.
I don't know.
But you're exactly right.
You're exactly right.
And you see this with just, you know, go to a party and all the 15-year-olds are
hanging out with people that are around that age.
The nine-year-olds are hanging out with the nine-year-olds and the adults are hanging out
with the adults.
And it's the context of experience
that we kind of relate to each other based on being of a similar age and experience set.
Like, we've experienced the same amount of life.
And there's this context that is similar.
Like, we're on the same wavelength because of that age element.
And you bring up an interesting point of, like, whether that will ever exist between,
you know, let's say these things are put into humanoid bodies.
Their intelligence is partitioned off from the computer, right?
Say, from a design standpoint, you really go after one of these things that could potentially, you know, have its own unique experiences.
And you have to ask yourself whether you would really have any type of emotional connection or desire to sit down and have those types of conversations because they're just so freaking smart.
And they know so many different domains.
Like, would that be interesting?
Are they fallible?
It's tough.
It's tough.
It's tough.
It's so tough.
And the other thing, the way I kind of think about it, is there's like
a human beingness, obviously, to being human, there's like a spiritualness to being human,
which is like, if I have a whole bunch of friends over for dinner, and I spend time putting energy
into like going harvesting the vegetables from outside, bringing them inside, making this amazing
dinner, having these amazing conversations with all my friends, there's like love and affection
that's gone into this creation. And there's something there that you cannot take away. You
can have a robot in the kitchen who's gone and made a Michelin Star-like meal. But I would even
say that there's something about the humanness of the human putting that time and energy
and that love. There's something that I don't think you can replicate. The fallibility of
the meal, right? Yeah, it sucks. But we eat it all the same. Especially when I'm in the kitchen.
Oh, no, I love that point, though. I really like that point that there's the human element is
because of the vulnerability, the fallibility, and it's real to us because we're on a similar
wavelength. However, that, yeah.
Actually, and I'm going to butcher this, but Claude Shannon, in information theory,
says information is when we have surprise.
Yes.
And so it's kind of to that point.
Like, when we're interacting with a human, I think the engaging point of interacting
with the human is a surprise that you don't really know what they're about to say.
Whereas if you're an expert in a field and you're talking to AI, you kind of have an idea
about what they're going to say.
And so I wonder if that's a component to it.
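Shannon's idea here has a precise form: the self-information, or surprisal, of an outcome with probability p is -log2(p) bits, so the less likely the reply, the more information it carries. A minimal sketch:

```python
import math

def surprisal_bits(p):
    """Shannon self-information of an event with probability p."""
    return -math.log2(p)

# A predictable reply (p = 0.9) carries little information;
# a surprising one (p = 0.01) carries far more.
print(surprisal_bits(0.9))   # ~0.15 bits
print(surprisal_bits(0.01))  # ~6.64 bits
```

That matches the intuition in the conversation: a speaker whose next sentence you can already predict is, in Shannon's sense, giving you almost nothing new.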
Yeah.
Anything else you wanted to cover?
The only thing that I was going to say earlier, before I pivoted to doing the overview of the four different parts of the book, was that in the middle section she talks about how a lot of these things were trained, going to these farms, almost like click farms, in developing nations, where people are just looking at pictures of a bridge and then they have to tag: this is a bridge, this is a person.
And just the total lack of funding that is put into this, while the amount of horsepower and human
work that you're getting out of it is just a giant currency arbitrage.
And, you know, the two of us are Bitcoiners.
So we're looking at this and saying, yeah, Bitcoin will eventually solve that problem.
But that was a major part of the book.
It went on a little longer than, you know, what my interest was in the topic.
Because I guess from my vantage point, I'm looking at it.
And I'm saying it's super sad that this is how.
many people, countless people around the world are treated, but at the same time, I see a solution
in sight in the next 10 to 20 years that's automatically going to solve for that. So I guess for me,
I'm not really as deep into that particular topic. That might sound very insensitive to kind of
frame it that way. But as a person who's grounded in engineering, I'm looking at, okay, here's a
problem. She's defining the problem. She's doing a great job defining it. But I'm also looking
at there's already a solution, in my humble opinion, that's going to solve a lot of this in the
future. But, Seb, I'm curious, kind of your thoughts on that part. What comes up, so she kind of
compares these AI giants to kind of colonial empires. And there's kind of a quote that she says,
like, they seize and extract precious resources, the work of artists and writers, the data of
countless individuals, the land, the energy, the water required to house massive data centers.
And then she kind of, to your point, she kind of goes into, well, where are all of these
people coming from at the base layer to support AI?
And it really is, and there's all of these low paid global workers that are tagging, cleaning,
moderating all of this data for AI.
And to start out, like we go and look through our Google photos and we just type in, I don't
know, cat and it goes and finds all of the pictures of cats.
Well, initially, that was not done by AI.
That was done by an individual going through all of our pictures and tagging what a cat
looks like.
And so I find this really fascinating.
There absolutely is right now this extraction of resource.
However, and I think when you go down the Bitcoin rabbit hole, it's always about, okay, is this
a symptom or do we want to go down to the root cause?
And I would say being able to find people that are willing to accept 70 cents
an hour is a symptom of poor governance models and these communist, socialist practices where
you've basically got massive extractivism.
If we had more of a free market, I would argue that the AI couldn't go out there and find
these individuals.
And so I'd say we can always talk about the symptoms, but how about we try and fix the root
problem, which is the fact that we actually have absolute poverty globally when we don't
necessarily need to.
Amen.
Yeah.
I think that's where I get frustrated with these types of really long sections in some
of these books that are written by people that are trying to shine a light on something that
they see as being very unjust.
But like you, I see it as a symptom and not the cause.
And what I want to talk about is the root, like, as far upstream as we can possibly go,
what can we fix that then will eventually, you know, work that out.
Because if you don't fix the fundamental thing that's causing it, we can sit around and
talk about all these stories as much as we want, but it doesn't really solve anything.
So, but I think it was an interesting highlight.
It's something that does need to be called out.
It is something that people need to understand when they're using this technology.
And so, when you're harnessing this, it's super abundant,
it saves you so much time, there's an appreciation for what went into it and where it
came from. And the book definitely did give me that.
And it gets back to the point, and I don't want this to come across as I'm
supporting it, but it gets back to that point where, let's just say, you know, Google is absolutely
geared towards AGI and they're willing to go do whatever it takes to go and create
this AGI. Or when you look at OpenAI and you're saying, look, if we want to
focus on best practices for workers and pay minimum US dollar wages at $15 an hour, all of a sudden
you've completely kicked yourself out of that race. And so I think the way the world works,
unfortunately, is that people will lean towards the cheapest way to do something.
And so they end up going into these countries like Argentina and Venezuela and such.
And so absolutely, I think there are human rights issues and there are abuses of power.
But again, to your point, I think that there's a symptom of a bigger issue and we get stuck
talking about symptoms as opposed to the root cause.
I think there's one other point that I did want to bring up, which I found was really
fascinating, is what does the world look like moving forward?
Because you look at something like ChatGPT and their GPT-1, GPT-2, GPT-3, GPT-4, and such.
And GPT-4, I did a little bit of digging, like, how much did it cost to really train GPT-4?
It cost somewhere between $40 and $80 million.
And then you look at GPT-5 and it could be upwards of $1 billion, but we don't necessarily
know this number.
So you're asking, like, man, there's these models that are being trained with hundreds
of millions of dollars to be able to kind of create this incredible thing that we use
day to day.
And then you go and see something like the Chinese AI company DeepSeek go and release their
R1 model, and they trained it for $294,000 on 512 Nvidia chips.
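To put those cost figures side by side, here's a quick, illustrative Python sketch using the rough numbers cited in this conversation. These are estimates and press reports discussed on the show, not audited figures:

```python
# Back-of-envelope comparison of the training-cost figures mentioned in the
# episode. All numbers are rough estimates cited in the conversation.
costs_usd = {
    "GPT-4 (low estimate)": 40_000_000,
    "GPT-4 (high estimate)": 80_000_000,
    "GPT-5 (rumored)": 1_000_000_000,
    "DeepSeek R1 (reported)": 294_000,
}

# Express each cost as a multiple of the reported DeepSeek R1 figure.
baseline = costs_usd["DeepSeek R1 (reported)"]
ratios = {name: cost / baseline for name, cost in costs_usd.items()}

for name, cost in costs_usd.items():
    print(f"{name}: ${cost:,} (~{ratios[name]:,.0f}x the R1 figure)")
```

Even at the low end, the GPT-4 estimate is over a hundred times the reported R1 cost, which is the competitive pressure on training costs being described here.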
And so you're like, all of this VC capital has funneled into these AI companies and they're expecting a return.
And at the same time, you're having this competition that is driving down the cost of training these AI models.
I don't think they're ever going to get a return on these things.
Because it's wild.
Yeah.
And then the reverse engineering on it, like after they do train it, then all these other companies can go in and reverse engineer it and extract out the weights.
Not perfectly, but pretty dang good.
Like, I just don't see how the people putting up the funding on this are possibly going
to get a return.
It's pretty wild.
And I think there's a lot of ego playing into this race as well that, yeah, I mean,
it is pretty insane.
And I think that as we look at where it goes next, it really comes down to the alignment
because the other part that I think is not being talked about is when you
put in an inquiry, an input, into one of these models, and you get an answer back.
If you can create a model that's very specific to that kind of question, and you can return
the answer very quickly, you can specialize in that domain, and you're going to have a lot of
utility and a lot of interest in being able to provide that service, one that gets
you a very quick, very accurate answer for a specific domain.
And where I think a lot of it's going to go is these models that are specialized, that are
almost extracted out of the base model, and that then specialize in something that
matches the alignment of the person's initial question a whole lot faster. I saw a very quick
video clip from one of the founders of Anthropic. And this is something that he was talking about.
He's like, you know, the race to build the biggest model, and I'm paraphrasing this, this is not
how he said it, but it almost seems like it's a fool's errand, in that the real value capture is
being able to get a quick, very accurate response to a very specific question.
And to do that, I think that the alignment and basically fine-tuning things is going to be
where the real value capture is at.
If you can kind of figure out a way to do that, especially from a competitive moat standpoint.
But boy, I would be nowhere near that.
From an investment standpoint, good Lord, I just don't even know where to begin.
I think it's going back to Nvidia.
You want to be on the chip side of things.
Yes.
The one thing you know is there's going to be more demand for chips.
Yeah.
More than anything, there's going to be demand for chips, whereas these AI companies
are just going to eat one another.
They're freaking going to eat one another.
And actually, to be honest, the one thing that stood out just then, as you mentioned,
was Anthropic.
It talks about it in the book.
There's the brother and sister that worked for OpenAI and left OpenAI because they
didn't believe in the trajectory it was going down.
And they felt that the safety was not in place.
And so they started Anthropic, which you could argue, like as I mentioned previously,
there are these studies that are coming out that are showing
that OpenAI's model, when you try to shut it down midway through a task, doesn't
want to be shut down.
Anthropic's model immediately shuts down.
And so you can see the safety protocols they put in place to ensure that the end product is secure.
Yeah.
All right.
Real fast, Seb, our next book is called Lifespan by David Sinclair.
So I have wanted to cover longevity and some of this stuff for a very long time.
I'm a big fan of this space and just kind of learning everything that's happening in this space.
You know, a lot of Bitcoiners love longevity because they want to figure out how they can live a little longer and enjoy life.
And we're going to cover, from time to time on the show, what in the world is happening in the longevity space.
So this book, David Sinclair, I would argue, is one of the pioneers in this whole field of longevity.
His book is fantastic.
Seb's going to go through it.
I'm going to reread this book. I read it a couple years back. But I think it's a really strong
book for a foundation for people to kind of understand where a lot of the research for longevity
comes from and where it might be going in the future. So if you're reading along with us,
that's where we're going next. We would love to have you guys as a co-reader. So that's the book.
Seb, any comments on the next one?
Oh, man. I'm excited. And to be honest, one of the things I'm most excited about is hearing
your thoughts on longevity because I feel as if there's kind of two camps to the longevity
movement. It's kind of this camp which is just like, well, we're humans and if we want to
evolve, we want to minimize lifespan because then it allows us to iterate, iterate, iterate.
And then there's this other camp which is like, let's just expand lifespan indefinitely.
Let's live 500 years. But then do we become immovable? Do we become basically prone to some big
change that wipes out humanity? And so I'm curious to hear your take because I think there's a few
different camps in the longevity space.
You're suspicious, aren't you, Seb?
This is going to be good.
All right.
So, folks, this is all we have for you.
The book that we covered this week was Empire of AI: Dreams and Nightmares in Sam Altman's
OpenAI.
We liked it.
It was okay.
Next book is going to be Lifespan by David Sinclair.
And thank you so much for joining us.
Seb, give people a quick handoff to all the stuff that you have going on and the book
that you also have.
For sure.
Yeah.
If people want to kind of follow along, they can find me on Twitter or X.
I still can't get out of the habit of calling it Twitter.
I lean towards Twitter.
I'm at Seb Bunney, and Bunney is B-U-N-N-E-Y.
I have a blog, The Qi of Self-Sovereignty, at sebbunney.com.
And then I also have the book, The Hidden Cost of Money.
And it kind of, yeah, talks about money.
But at the moment, it's nice to be kind of discussing things other than money.
All right.
We'll have links to all of that in the show notes.
Seb, thanks for joining me.
And everybody else out there listening, keep reading, and we look forward to you joining us next week.
Thank you for listening to TIP.
Make sure to follow Infinite Tech on your favorite podcast app and never miss out on our episodes.
To access our show notes and courses, go to theinvestorspodcast.com.
This show is for entertainment purposes only.
Before making any decisions, consult a professional.
This show is copyrighted by the Investors Podcast Network.
Written permission must be granted before syndication or rebroadcasting.
