The Knowledge Project with Shane Parrish - Josh Wolfe: Human Advantage in the World of AI
Episode Date: March 4, 2025

While Silicon Valley chases unicorns, Josh Wolfe hunts for something far more elusive: scientific breakthroughs that could change civilization. As co-founder and managing partner of Lux Capital, he's looking for the kind of science that turns impossible into inevitable. Josh doesn't just invest in the future; he sees it coming before almost anyone else.

In this conversation, we explore:
- The rapid evolution of AI and potential bottlenecks slowing its growth
- The geopolitical battle for technological dominance and the rise of sovereign AI models
- How advances in automation, robotics, and defence are shifting global power dynamics
- Josh's unfiltered thoughts on Tesla and Elon Musk
- AI's revolution of medical research
- Parenting in a tech-dominated world
- How AI is forcing us to rethink creativity, intellectual property, and human intelligence itself
- Why the greatest risk isn't AI itself, but our ability to separate truth from noise

Despite the challenges ahead, Josh remains profoundly optimistic about human potential. He believes technology isn't replacing what makes us human; it's amplifying it. This episode will challenge how you think about innovation, risk, and the forces shaping our future. If you want to stay ahead of the curve, you can't afford to miss it.

Josh Wolfe co-founded Lux Capital to support scientists and entrepreneurs who pursue counter-conventional solutions to the most vexing puzzles of our time. He previously worked in investment banking at Salomon Smith Barney and in capital markets at Merrill Lynch. Josh is a columnist with Forbes and Editor for the Forbes/Wolfe Emerging Tech Report.

(00:00:00) Introduction
(00:02:46) Current Obsessions
(00:05:11) AI and its Limitations
(00:10:58) Memory Players in AI
(00:13:27) Human Intelligence as a Limiting Factor
(00:15:38) Disruption in Elite Professions
(00:17:15) AI and Blue-Collar Jobs
(00:18:29) Implications of AI in Coding
(00:19:40) AI and Company Margins
(00:25:48) AI in Pharma
(00:26:44) AI in Entertainment
(00:28:04) AI in Scientific Research
(00:33:31) AI in Patent Creation
(00:34:49) AI in Company Creation
(00:35:33) Discussion on Tesla and Elon Musk
(00:40:54) AI in Investment Decisions
(00:42:20) AI in Analyzing Business Fundamentals
(00:45:27) AI, Privacy, and Information Gods
(00:53:04) AI and Art
(00:56:43) AI and Human Connection
(00:58:22) AI, Aging, and Memory
(01:00:46) The Impact of Remote Work on Social Dynamics
(01:03:18) The Role of Community and Belonging
(01:05:44) The Pursuit of Longevity
(01:11:58) The Importance of Family and Purpose
(01:14:18) Information Processing and Workflow
(01:26:03) Investment in Military Technology
(01:28:09) Global Conflict and Military Deterrence
(01:31:28) Information Warfare
(01:32:32) Infiltration and Weaponization of Systems
(01:37:06) Infrastructure Maintenance and Growth
(01:38:27) DOGE Initiative
(01:40:09) Attracting Capital and Global Competitiveness
(01:43:16) Attracting Talent and Immigration
(01:45:42) Designing a System from Scratch
(01:47:30) AI and Intellectual Property
(01:51:56) The Fear of AI

Newsletter - The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter

Upgrade - If you want to hear my thoughts and reflections at the end of the episode, join our membership: fs.blog/membership and get your own private feed.

Watch on YouTube: @tkppodcast
Transcript
Look at AI right now, billions of dollars being spent.
The flurry of all of this benefits us as consumers, always and everywhere.
Just wait, and they'll compete and compete and it will accrue to us as users.
How do we create an unfair advantage in a world of AI?
I think the limits to human intelligence are rooted in our biology.
AIs over time will understand us in many ways better than we understand ourselves.
We can still beat computers at chess. Done.
We can still beat them in Go. Done.
We can beat them in video games. Done.
Okay, but we still have creativity. Done.
All these things have been trained on the sum total of all human creation.
And now they're being trained on the sum total of human creation plus artificial creation.
I'm absolutely convinced that we are going to have machines doing science 24-7.
I just think it's going to be part of the total oeuvre of creation.
And I think it's a beautiful thing.
Welcome to the Knowledge Project.
I'm your host, Shane Parrish.
In a world where knowledge is power,
this podcast is your toolkit for mastering the best of what other people have already figured out.
If you want to take your learning to the next level, consider joining our membership program at fs.blog/membership.
As a member, you'll get my personal reflections at the end of every episode, early access to episodes, no ads, including this one, exclusive content, hand-edited transcripts, and so much more. Check out the link in the show
notes for more. While others ask what's trending, Josh Wolfe asks, what seems impossible
today? He's built a career betting on scientific breakthroughs that most people don't believe can
happen. As co-founder of Lux Capital, he's backed companies cleaning up nuclear waste and building
brain-computer interfaces, but here's the contradiction. Despite investing in technology that
could make humans obsolete, Josh is profoundly optimistic about human potential. His thinking challenges
conventional wisdom. While most see AI as automation and threats to humanity, Josh sees it
as a catalyst for human achievement. In this conversation, we explore this paradox, diving deep
into how technological evolution can amplify rather than diminish what makes us human.
From geopolitical power shifts to the future of human creativity, Josh reveals the exact
frameworks he uses for seeing what others miss and betting on the seemingly impossible future.
It's time to listen and learn.
Start with what you're obsessed with today. What's on your mind?
Well, first and foremost, kids and family, being a good dad and a good husband. Technologically, I'm obsessed with so many different things. We were just
in a partnership meeting. And probably the most interesting thing at the moment is thinking about the
speed of certain technologies, like the actual physical technologies and where the bottlenecks are.
So in biology, you've got all kinds of reactions, nature figured out, evolution figured out, enzymes
and catalysts and things that can speed up reactions. But you can't move faster than the speed of
biology. Now you think about AI, total different field.
but the same sort of underlying philosophical principle.
If you've tried ChatGPT Operator,
it can only move at the speed of the web.
And even at that, it's a little bit slow with latency.
So we're thinking about what are the technologies
that can accelerate these things
that have these natural, almost physics-like limits, whether those limits are biological or digital?
So that is something that at the moment I'm obsessed with
in part because I have ignorance about it.
And when I have ignorance about it, my competitive spirit says, how do I get smart about this?
How do I get some differentiated insight?
How do I know what everybody else thinks and then find the sort of white space answer?
So that's one big thing.
Obsessed with geopolitics.
It is the great game.
Everything from U.S.-China competition to Iran, Israel, axes of religious conflict, the Sahel and Maghreb in Africa,
which is not an area that a lot of people talk about or think about, but that I believe, with low probability but super-high-magnitude import,
is going to become the next Afghanistan,
that that region in the Sahel,
all these coups and cascades of these coups of failed states
where you have Russian mercenaries, violent extremists,
Chinese CCP coming in with infrastructure and influence,
European colonial powers being kicked out,
against this backdrop of brilliant scientists and engineers
and technologists that went to HBS or Stanford and worked at Meta and Google and are going back, particularly to places like Ghana and Kenya and Nigeria, building businesses.
So that continent is going to be a truly contested space for future progress and for utter chaos and terrorism.
So, yeah, it's a widespread of stuff that I'm probably currently obsessed with.
Let's talk about all those things.
Let's start with sort of diving in.
You mentioned ChatGPT Operator.
Yes.
And the limitations sort of being we're moving at the speed of the web.
At what point do you think that we're, those systems are all designed for humans?
At what point do they become designed for AI first and then humans are using them?
Or do we just have two simultaneous interfaces?
I think it's going to be both.
I think that there's always this like ignorance arbitrage where somebody figures out that there's an opportunity to take advantage and improve a system while people don't understand it.
And so I think that there are people that are probably going to launch redesign businesses where they say, we will optimize your web pages, just like people did for search engine optimization when Google rose, because you had to be more relevant, because Google was so
important. Google was influencing whether or not people would see you or discover you. And so if there
are certain tasks like OpenTable or restaurants or shopping, they are now going to start to shift
their user interface, not just for the clicks of a human, but for the clicks of an AI that's doing
research on the best product. And the way that those are going to negotiate and in some cases
influence or trick the judgment and the reasoning of the AI is really
interesting. So I think that that is probably the next domain where those things are going to get
better, faster, they're going to have more compute, but then people are going to be redesigning
some of these experiences, not for us and the user interface of humans, but machine to machine.
And you're already starting to see this where there was one element of like an instantiation
of R1 communicating with another R1, but the language that they were communicating in was not English
or Chinese or even traditional code. It was like this weird, almost alien language. And so I think
you can see a bunch of that. There's another adjacent theme, which I think is also really
interesting, which is AI portends the death of APIs. So APIs allow Meta, with their Meta glasses, Orion, to be able to communicate with Uber and Spotify through the backend, through
software. But increasingly, those things are complicated. They're hard to negotiate. There's a lot of
legal. There's API calls. There's restrictions. There's privacy. There's controls. But if you're
one of these companies that are like, I don't want to go through all of that. Can I just use AI
to pretend that I'm a user? And in fact, I had this experience where I was using operator and I had
this moral dilemma for a split second. Do I click to confirm that I'm not a bot because I had to
take control over it because the bot actually was trying to do my research on behalf of me? And so
you see a world where APIs that have been the plumbing of everything in SaaS and software and
negotiating behind the scenes may start to lose influence and power to AIs that are able to basically
just negotiate on the front end as though they were a user. So I think that that whole domain
is going to very rapidly evolve in like a quarter or two. You mentioned sort of the limitations
like biology moves at a certain speed. There's a couple subset of questions here, but one is
where's the limitation in AI growth right now? It seems like we have energy as a key input. We have
compute as a key input and then we have data slash algorithms as like the next key input. What
am I missing? What's the limitation on those? Well, start with conventional wisdom, which has
heretofore been correct, which is you need more compute, you need more capital, scaling laws for
AI, just throw more GPUs and processors and money and ultimately energy to support that and you will
get better and better breakthroughs. The counter to that, which you're seeing in some cases with
open source or people that have now trained on some of the large models, is that there's
going to be a shift towards model efficiency. And so that's number one, that people are going to
figure out how do we do these more efficiently with less compute. Number two, which is a big
sort of contrary thesis that I have, is that a significant portion of inference. So, you know,
if you break down training, you still need large 100,000-ish clusters of H100s, top-performing chips.
It's expensive. Only the hyperscalers heretofore have been able to do that. You can do some
training if you're using things like Together Compute and some of our companies, without having to do that yourself, sort of like going back to the on-prem versus colo versus cloud transition 15 years ago. But I think that you're going to end up doing a lot of inference on device, meaning instead of going to the cloud and typing a query, 30 to 50 percent of your inference may be on an Apple or an Android device. And if I had to bet today, it's Android, because of the architecture, over Apple. But if Apple can do some smart things, maybe they can catch up to this. But some of the design choices
and the closed aspects, which have been great for privacy as a feature, may hurt them in this
wave. And you could already see, like, perplexity can actually be an assistant on my Android
device. I carry both so I can understand both operating systems. You can't do that yet on
iOS and Apple. But here's the insight. If 30 to 50 percent of my inference is my cached emails
and my chats and my WhatsApp and all the stuff that I keep on my device, my photos, my health
data, my calendar, then the memory players may play a much bigger role. So now you have Samsung, SK Hynix, and Micron that are important here. If I had to come up with a somewhat pejorative analogy,
I think Samsung is going to be more like Intel. I think that they're just a little bit sclerotic
and bureaucratic and slow-moving, and it's going to sort of ebb and decline. I think Micron
is a U.S. company, which is going to be more constrained by restrictions that are put on
export control. And I think SK being a Korean company is going to be able to skirt those in the same
way that NVIDIA has with distribution to China through Singapore and Indonesia, which is now
being investigated. But the memory players are likely to be ascendant. And so what heretofore was
a bottleneck on compute could shift attention, talent, money into new architectures where memory plays a
key role on small models on device for inference.
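A minimal sketch, in Python, of the on-device-versus-cloud split Josh is describing: queries that only touch data already cached on the phone (email, chats, photos, health, calendar) are answered by a small local model, and everything else falls back to a frontier model in the cloud. The source sets and the run_local/run_cloud helpers are hypothetical placeholders, not real APIs; the point is that the more queries take the local path, the more the bottleneck shifts from datacenter GPUs to memory on the device.

```python
# Hypothetical sketch of on-device vs. cloud inference routing.
# ON_DEVICE_SOURCES and the run_local/run_cloud helpers are stand-ins,
# not real APIs -- the idea is the split described above: queries over
# personal, already-cached data stay on the device; the rest go to the cloud.

ON_DEVICE_SOURCES = {"email", "chat", "whatsapp", "photos", "health", "calendar"}

def run_local(query: str) -> str:
    return f"[local small model] {query}"      # small model, memory/NPU bound

def run_cloud(query: str) -> str:
    return f"[cloud frontier model] {query}"   # large model in a datacenter

def route(query: str, sources_needed: set) -> str:
    # If every data source the query needs already lives on the device,
    # answer locally; otherwise fall back to the cloud.
    if sources_needed and sources_needed <= ON_DEVICE_SOURCES:
        return run_local(query)
    return run_cloud(query)

print(route("When is my dentist appointment?", {"calendar"}))
print(route("Summarize the latest papers on memory bandwidth", {"web"}))
```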
I've always thought of memory as like a commodity. It doesn't matter if I have a Micron chip or SK Hynix's.
And that's what people thought about CPUs back in the day.
And then GPUs for just, you know, traditional video graphics.
And then the AI researchers came and said, well, wait a second, we can run these convolutional
neural nets on these GPUs.
And they reached into somebody else's domain of PlayStation and Xbox and pulled them in.
And suddenly, you know, it lit up this phenomenon that turned Nvidia from $15 billion to,
you know, two and a half or three trillion.
Do you think Nvidia has got a long runway?
I think that their ability with that market capitalization and that cash and the margin that they have, they can reinvent and buy a lot of things. So I think Jensen is a thoughtful capital allocator. I think he's benefited and caught lightning in a bottle over the past 10 years, and now just particularly in the past six. I would not count Nvidia out. Now, what's the upside from $2.5 or $3 trillion to $5 trillion or $10 trillion, or do they go back down? I have no idea. So that's more of a fundamental valuation based on the speculation. But if you have 95% margins, and your chips are $30,000, could you shrink them down and sell for $3,000 and take
small margins but get more volume? And this was some of the debate that I think happened with the
release of DeepSeek, where you had Satya basically talking about Jevons paradox, that any one thing
might get more efficient, but the result is not that you have less demand. In aggregate,
you have much more demand. You know, the classic example of this is like refrigerators.
A single refrigerator back in the day was an energy hog. All of a sudden you make these things more efficient, and what happens? It becomes much cheaper. So if something's cheaper, you're going to buy more of it. And they shrunk refrigerators down, and so now everybody had one in their garage and in their office and in their basement. And so the aggregate demand for electricity, and then for all the components and coils and refrigerant, went up, not down.
Same thing with bandwidth. If you have a 56k baud modem, which was like my first modem, you know, dial-up internet and all that kind of stuff on CompuServe. Then you go to a T3 and fiber optic, you know, at the speed of light.
It is way more efficient, but your usage now is just huge.
You're streaming 4K videos and watching Netflix and trading on Bloomberg.
Whereas if you actually want to decrease use, the non-obvious thing that you would want to do is actually slow down speed.
I mean, put me on a 56K bod modem today.
I'd just like pull out my hair and never use it.
You couldn't even use Gmail.
Right.
Exactly.
Exactly.
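A toy illustration of the Jevons paradox argument above, with entirely made-up volumes and unit costs (only the $30,000-versus-$3,000 chip prices come from the conversation): whether cutting price and per-unit margin wins depends entirely on how elastically demand expands.

```python
# Toy Jevons-paradox arithmetic with hypothetical numbers.

def aggregate(price, units, unit_cost):
    revenue = price * units
    profit = (price - unit_cost) * units
    return revenue, profit

# Scenario 1: expensive chips, fat margins, small volume (made-up figures).
rev_hi, profit_hi = aggregate(price=30_000, units=100_000, unit_cost=1_500)

# Scenario 2: price cut 10x, thinner margin per unit, but assume demand grows 30x.
rev_lo, profit_lo = aggregate(price=3_000, units=3_000_000, unit_cost=1_500)

print(f"High-price world: revenue ${rev_hi/1e9:.1f}B, profit ${profit_hi/1e9:.2f}B")
print(f"Low-price world:  revenue ${rev_lo/1e9:.1f}B, profit ${profit_lo/1e9:.2f}B")
# The outcome hinges on the assumed demand elasticity -- which is exactly
# the debate triggered by DeepSeek about aggregate demand for compute.
```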
Where do you think intelligence is the limiting factor of progress? Human intelligence?
I think that the great thing about us is that we are, I don't know, 60 or 70% predictable, in that so many of the foibles and virtues and vices of man and woman are Shakespearean. They are hundreds, I mean, tens of thousands of years old for modern evolution, but with that, we are still irrational. You know, Danny Kahneman was a friend. Danny passed last year, just an amazing guy, but he could document all the heuristics
and all the biases, which you study and write about.
And he's like, I'm still a victim of them.
Even knowing them doesn't really insulate me from falling to them.
It's just like an optical illusion.
You can know it's an optical illusion, but you still see it and you still fall for it.
So I think the limits to human intelligence are rooted in our biology.
And we have all of these embodied intelligences that have sort of been externalized in calculators and in computers that help us to overcome that. And I certainly feel today that a significant portion of my day that might be spent
with a colleague riffing on something. And sometimes that's like great for a muse or tapping into
information or intelligence or some tidbit or a piece of gossip or an experience that they had. And that's
why, like, cognitive diversity is really important. I'm complementing that with all-day chats with Perplexity and Claude and OpenAI and, you know, any number of LLMs that might
hallucinate just like a friend and might be wrong about something, but might give me some profound
insight. I think what I'm interested in is at what point will machines, like if intelligence
is the limiting factor on progress in certain domains or areas, it strikes me that in the near
future, the machines will be able to surpass human intelligence. For sure. And if that's the case,
then those areas are rife for either disruption or rapid progress. Well, I think
You know, what Peter Drucker never imagined when he was talking about knowledge workers, and the shift from blue-collar workers to white-collar workers, was that machines would actually most threaten those professions. And so you take some of the most elite professions. Doctors, the ability to do a differential diagnosis: today I take my medical records as soon as I get them from top doctors, and I still put them into LLMs to see what did they miss. And sometimes it unearths some interesting correlations. Is there a scientific paper from the past ten
years that might have bearing on this particular finding or something in a blood test.
So that is really interesting.
Lawyers, language is code.
Multi-billion-dollar lawsuits sometimes come down to the single placement and interpretation
by a human of a word.
One of the things that Danny Kahneman recognized, and he published this in his last book,
Noise, was that the same case, the same information, the same facts presented to the
same judge at different times of day or presented to different judges.
They are not objective.
And he actually thought that for justice and fairness, you would want the human intelligence applied to these situations, with their biases, to be either complemented or replaced by AIs that had a consistency and fidelity in how they made decisions.
So those are all high-paying jobs with lots of training, lots of time to gain the experience, the reasoning, the intelligence, and many of those things are at risk.
government itself, the ability to legislate, make decisions, you want to be able to capture
and express the will of the people. But increasingly we have social media that's able to do that.
It could be corrupted, but there's mechanisms to figure out how do you really surface what does
the populace care about. And some in Europe have tried to do these things. The key thing is
always, like, what's the incentive and what's the vector where somebody can come in and corrupt
these things? But interestingly, I actually think that the people whose jobs
are like most protected in this new domain are blue-collar workers.
A robot today can't really fully serve a meal, and they cannot effectively fold laundry, even though every humanoid robot tries to. And there's still basic jobs that are not low-paying, and they're arguably safer than ever.
And this ties into immigration and technology and human replacement,
but people that are doing maintenance, people that are plumbers,
many of these things are standardized systems,
but it's like the old joke about the plumber that comes in, and he taps a few things and suddenly the pipes are fixed, and he says, you know, how much is it? A thousand bucks. A thousand bucks for that? He's like, it was a five-dollar part. He's like, yeah, the part was five dollars, but nine hundred ninety-five was knowing where to tap and where to put it. And so I think that there's still a lot of this tacit knowledge
and craft and maintenance that is going to be protected against
the rise of the machines that are going to replace most of the white collar
intelligence and knowledge workers.
Do you believe, I think it was Zuck who came out and said, you know, by mid this year, 2025, that AI would be as good as a mid-level engineer.
In coding.
Yeah, for sure.
What are the implications of that?
Walk me through, like, the next 18 months, if that's true.
Well, again, if you take this in the frame of Jevin's paradox, then a lot more people are going to be able to code in ways that they never have.
And in fact, I think it was like Andrej Karpathy, who was on Twitter a day or two ago talking about how he himself, as a coder, was basically just talking in natural language and having the AI, and I forget which one he was particularly using, generate code.
And then if he wanted to tweak something and change a design and make a particular sidebar a little bit thicker or thinner, he would just say, make the sidebar thinner.
And it was able to do that.
So I think the accessibility for people who never coded, never programmed, to be able to come up with an idea and say, oh, I wish there was an application that could do X, Y, and Z, and to be able to quickly do that, is great.
For the big companies who employ many coders that are competing at an ever faster speed,
you know, you have somebody like Marc Benioff who's saying that they're not hiring any more coders,
at the same time that he's still talking about the primacy of SaaS, which is this weird contradiction.
But I would suspect that maybe you lose 10 to 30% of the people that you normally would have hired,
but the people that are there are still like these 10x coders, and now they have a machine that's
helping them be like 20 to 100x.
Do you think margins go up then for a lot of these companies?
I don't know.
I always feel like margins are always fleeting in the sense because it's like a fallacy of composition.
One company stands up a little bit higher and then everybody else is on their tippy toes.
So I think it just changes the game, but I don't think that you have some like permanent margin.
The only time you get really large margins is when you truly have like a monopoly.
Like, Nvidia today, until there is an alternative, whether in architecture or algorithms or in something else, you know, they've got dominant margins because they can charge super high prices, because there is no alternative. So when you have that, there is no alternative. But in
many domains, given enough time, there's an alternative, and then margins just resettle. Look at cars. You know, the average margin on cars: cars today are 10,000 times better by every measure of fuel efficiency, comfort, air conditioning, satellite radio, but those margins never persisted
as being like permanently high. I always come back to sort of Buffett in the 60s with the loom,
Right. I always relate everything to the loom because everybody was coming to him when he first took control and they're like, oh, we get this new technology.
Right. Yeah, but all the benefits are going to go to the customer. It's not going to go to me.
I mean, look at AI right now. Billions of dollars being spent. All the foundation models, all the competition. The second that DeepSeek comes out, it suddenly accelerated the internal strategic decisions from OpenAI of when are we going to release models. And so the flurry of all of this benefits us as consumers, always and everywhere. And so we were looking at
internally installing some new AI system to surface all of our disparate documents.
And there's a bunch of these.
And our Gmail, our Slack, our Google Docs, our PDFs, our legal agreements, and just
have a repository with permissions and all this.
And it's expensive, in part because going back to that ignorance arbitrage, somebody could
charge us a lot of money to do that implementation.
And my default was, just wait.
Why don't we wait six months?
Because this is going to be available from all of the major LLM providers today that want to get the enterprise accounts.
And let's just wait, and they'll compete and compete, and it will accrue to us as users.
Talk to me about all these models.
People are spending hundreds of billions, if not trillions of dollars around the globe,
competing on a model.
Do you think that's the basis of competition?
How does that play out?
And then you have Zuck who's trying to open source it, and he spent, I don't know,
what, 60 to 100 billion, probably by the end of June this year, open sourcing it.
So he's basically, like, I wouldn't say he's doing it for free, but what's the strategy there?
First you have
just straight head-to-head competition
Anthropic and Claude, and ChatGPT and OpenAI, and others.
Then you have sovereign models.
So there are countries that are saying we don't
want to be beholden to the U.S. or to China.
We funded a company in Japan called
Sakana. This is one of the lead authors from the Google Transformer paper, and a guy David Ha, just an incredible team, and they are actually trying to do these super-efficient novel architectures, so they're not training on these multi-100,000-GPU clusters. Their latest model, which was based on this evolutionary technique, was like eight GPUs, which was wild.
So that's one trend.
But on the strategic question for Zuck, I actually think that he's probably playing at the
smartest of everybody, which is, and he's been open about this.
We're going to open source the models with Lama, and we're going to let people develop
on them.
Why?
Because the real value is going to be in the repository of data, longitudinal data, deep data.
If you go back 10, 15 years, like the number one thing in tech was big data, big data, big data,
big data. Okay, well, now if you actually have big data, you want to use whatever models are out there to run on your proprietary silo of data. So who are the people that I think are going to be advantaged? Meta. Why? They've got all my WhatsApp messages. Apple doesn't; Meta does. They've got all my Instagram likes and preferences and every detail of how long I spend and linger on something and what I post, and all of that content, my Facebook, which I don't really use anymore other than when Instagram, you know, cross-posts to it. But that is super valuable. And they care about that in part because Zuck needs to route around both Apple and Google.
He does not have a device. I mean, you've got Oculus and MetaQuest and whatnot, but that's not the one.
This Orion, with the neural band from the company CTRL-labs that we funded, which was for the non-invasive
brain machine interface to be able to use free gestures, which is an absolute directional arrow
of progress, right, disintermediating the control surfaces you have, remote controls and all that kind of
stuff and just being able to gesture, map a device to your human body is absolutely the trend.
But he's thinking about how do I route around these devices and how do I have a long repository
of everybody's information and use the best model that's out there. And the great thing about
open source is it'll continue to improve over time. So I think that that's a winning strategy.
I think the people that are continuing to develop ever better models, unless they have proprietary
data, are going to be sort of screwed. Bloomberg should do really well. I mean, the huge amount
of proprietary information, all the acquisitions that they've done over time, being able to
normalize merger data and historic information and the longitude of price information and
correlations between different asset classes, being able to run AI on top of that is like
a quant's dream. So I think that people that have hospital systems, arguably some governments
have used efficiently, but anybody that has a proprietary source of information, clinical trials,
failed experiments inside of pharma companies, being able to do that is the real gold. And the large language models are effectively, over time, I think, going to trend towards zero, a commodity excavator of that data. So the moat is really in the data, I think, because everything will be sort of comparable running on top of it. The data sitting by itself is like an oil well that isn't mined, you know, or a natural gas find that isn't fracked. So it needs to be extracted, and I think most likely open source, but in some cases enterprise partnerships between Anthropic or OpenAI with some of these siloed
data sets will unleash a lot of value.
So aside from meta, what counterintuitive sort of public companies would you say have
like really interesting data sources?
That's a good question.
I haven't really spent a lot of time on that to figure out who's got crazy amounts of
proprietary data.
Pharma would be a good one because obviously they're, you know, tracking both their successful
but they're unsuccessful clinical trials.
There's a lot of information in the unsuccessful data,
like the things that failed that you can learn from.
You could argue that Tesla, of course,
who I'm very publicly critical of,
like if they truly are collecting a ton of road user data
from every Tesla that's being driven,
that would be valuable.
Anything where there's a collection,
a set of sensors,
a repository of information that is owned by them.
Anything that we've signed off on, that your data is free for us to use. Like Meta.
You think of Tesla.
They should have the best mapping software in the world.
They literally drive millions of miles every day.
They can update everything.
They can locate police.
They can locate speed cameras.
They can get real-time traffic.
Weather patterns.
Yeah, totally.
But, okay, the flip side of that, though, right?
Taking sort of like the opposite view for a moment, Netflix.
Netflix has all of our viewing data.
They know what we like.
They know what you like.
They can make a perfect set of channels for you.
And the recommendations are reasonably good approximations of adjacencies to things
that you liked, but they haven't been successful, nor has the human algorithm at, say, HBO
in the past of like perfectly creating the next show that you really want to see. And what's
interesting about that is they've put a lot of money into this, but it hasn't yielded the recipe
maker for like the next perfect show. And oftentimes the thing that you want to see is almost
something that's orthogonal from what you've been watching. Yeah. Like I heard Anthony Mackie,
who's in, you know, the latest Marvel movie
talking about an expectation
that he'll be, somebody was like,
how long do you think you'll be doing Marvel movies?
And he's like, I think probably like the next 10 years.
And I'm like, probably two or three
because people are just bored of this stuff after a while.
Like nobody wants to see another,
I don't want to see another Marvel movie.
I like the adjacency of, like, The Boys.
Yeah.
Which was like the dark, you know, superhero kind of movie.
And I think trying to find groups
that have proprietary data that have some predictive value, the most value probably for society, I don't know if it'll be entirely
captured by companies, is just all the scientific information that we have. Because I'm absolutely
convinced that we are going to have machines doing science 24-7. Well, so talk to me a little bit
about, I want to come back to Tesla in a sec, but let's go down the science. Why is nobody sort of
taken every study published in a domain, say, Alzheimer's research, popped it into GPT and
be like, where are we wrong? What studies have been fabricated or proven not true that we're
investing research in, right? Because studies get built on studies and studies. And so if something
from the 80s came out and it's like completely false, we've probably spent $20 billion down this
rabbit hole. And what's the next most likely thing to work? Is anybody doing that?
I have to imagine they are, because Deep Research came out, you know, today or in the past 24 hours from OpenAI, which is sort of their model with a better engine, so to speak, than Google's Deep Research, which itself was impressive, both because of its ability to search many sources and then the ability to, you know, I think it was either there or through NotebookLM, to conjure the podcast, which at first was a static presentation of near-human-quality voice, but now you can interrupt it like a radio call-in, which is super cool. But you can say, go through the past 15 years
of PNAS papers or Science and Nature papers around this particular topic, and find correlations between papers that do not cite each other, or tell me any spurious correlations. And the beauty of all of that, on sort of the information or informatic side, is eventually you will have a materials-and-methods output of that, that you can feed into something like Benchling or some of the automated lab players to actually say, run the experiment. So I'm absolutely convinced, like high certitude. I don't know exactly which company will do it. We've invested in some; they haven't worked. We'll invest in more. Hopefully they will. But this directional arrow of progress of the idea
by analogy of machines doing science 24-7 automated is going to happen. I'll give you one or two
analogies. If you were a musician back in the day, if you and I were starting a band, we would
have to go and get studio time here in New York City, or Electric Ladyland or whatever, and you bring your instruments. Okay, maybe you could rent the instruments there. And then GarageBand and Logic and Pro Tools pop up. And now we don't have to be in the same physical space.
My instrument is virtualized.
I can create a temporal sequence of notes.
I can layer them in.
You could play drums.
You could do vocals, blah, blah, blah.
Science is the same thing.
I can be on the beach in the Bahamas and conjure a hypothesis and use one of the AIs to test
the hypothesis and look at past literature searches, see freedom to operate, see if there's
white space, and then tell one of these cloud labs, which is literally like sending something to AWS back in the day, and say, run this experiment. And the beauty of this is the robot will do the, well, here's the beauty, the virtue
and the vice. The robot will do the experiment and it should do it perfectly because it's digital
and it's high fidelity. The vice is so much scientific breakthrough has often happened because
of serendipitous screw-ups. Right. And so you want almost to engineer, like a temperature on an AI model, a little bit of stochastic randomness, so that the machine can sort of screw it up to see what
might happen because, you know, penicillin and Viagra and rubber and vulcanization, all these things
happen by, like, random processes, and then post facto, we're like, huh, that's funny. And then, you know,
you run with it. But then the machine will say, here's the results. And it will then reverse prompt
you and say, do you want to run the experiment again, but changing the titration of this to 10 milliliters
instead of five, and you just click a button from the beach, and you're like, yes, and the robots run it.
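A minimal sketch of the "engineered temperature" idea for an automated lab: most runs execute the protocol exactly, but with a small probability one parameter is deliberately jittered and the deviation is logged, so a surprising result can be traced back to its serendipitous screw-up. The protocol fields and run_assay are hypothetical stand-ins for whatever a real cloud lab would execute.

```python
import random

# Toy sketch of engineered serendipity in an automated lab: usually run the
# protocol exactly as specified, but with a small "temperature" deliberately
# jitter one parameter and record the deviation so surprises are traceable.

def run_assay(protocol: dict) -> float:
    # Stand-in for the physical experiment; returns a fake readout.
    return sum(protocol.values())

def run_with_temperature(protocol: dict, temperature: float = 0.05) -> dict:
    executed = dict(protocol)
    deviation = None
    if random.random() < temperature:
        param = random.choice(list(executed))          # pick one parameter
        jitter = random.uniform(0.8, 1.2)              # perturb it by +/-20%
        deviation = (param, executed[param], executed[param] * jitter)
        executed[param] = executed[param] * jitter
    result = run_assay(executed)
    return {"protocol": executed, "deviation": deviation, "result": result}

print(run_with_temperature({"titration_ml": 5.0, "temp_c": 37.0, "minutes": 90.0}))
```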
Whoever ends up creating and building that, I think, is going to make a fortune.
Well, you don't even have to decide. The robots could decide.
Yes.
Right? And you're sort of out of the loop, and it just outputs science.
Totally. That would be so interesting.
Before we get to that point, we'll probably get to the point where models make themselves better. Is that the point where it really starts to go, like, parabolic,
almost?
I don't know. I definitely see that models can improve, because you can even argue, like, DeepSeek's R1 is a model that was improving upon outputs from ChatGPT and so on. So
I definitely think that there will be this recursive improvement, but you're still going back to being rate limited by time and biological or chemical reactions, you still need to instantiate this into a physical experiment.
And so you can model and simulate all you want, but then you actually have to, like, do the thing and make the compound.
And so those still take steps, and, you know, in organic chemistry there's like 20 reaction steps and people optimize to reduce them down to six,
and you still need the physical reagents and the right temperature and the experimental design.
So I still think that that's going to be the bottleneck, but for sure, like the ideation and experimental design is going to, that's just going to absolutely explode.
And then you'll have these automated labs with lots of different instruments where robots will be able to take out a sample from a centrifuge and put it into the next thing.
And like, you don't need humans to do that any more than you need humans to assemble sophisticated iPhones.
I mean, we still have very cheap labor in Foxconn factories in China and Vietnam and elsewhere now doing that.
But there's no reason for that over time.
Isn't that low-hanging fruit, though? Just, like, look at all the work that we've done so far and tell us where we're on the right track and where we're sort of going astray.
Totally.
Like, there's nothing preventing that from happening today.
I mean, it's very David Deutsch, and it's like, if it obeys the laws of physics, it should be possible.
There's nothing about this that is like totally speculative and fantastical that it doesn't obey the laws of physics.
The other project that I wanted to, that I was thinking about sort of doing, is just, call it like a priorart.org or something, having AI read through all the patents and make the next adjacent sort of patent, like an improvement, and then just publish it, because then there's prior art.
Right.
Well, yeah, there will for sure be AI patent trolls if you put it negatively.
This would dissuade that.
I mean, in a sense, it would sort of be making prior art for as much as you can 24-7.
There are companies that do this where they have creative patent filers for continuations in part so that they can keep sort of the life of this going.
But with the rise of agentic AI, you can have some crazy idea, some brain fart hit, you know, 9:30, 10 o'clock at night.
And you just had, you know, cocktail with friends.
You're like, oh, like imagine, you know, if that exists.
Well, do the research.
Does this exist?
And if it doesn't exist, can you, you know, write a patent and sketch a diagram for me and file it, and start and incorporate a company?
Now, all those things have to go at the speed of like certain processes, but all of that
could be done overnight where you literally wake up and there are multiple agents working
on your behalf that have filed a patent, created a design, incorporated a company,
possibly even put it out to some group that has opted in and raised money for it overnight. And I don't know, if it was, like, successful enough and it hit a bunch of criteria, I might allocate some capital into an account to allow a robot AI to actually receive pitches, respond to them,
and create a small portfolio to allocate as an experiment.
So you can see this whole thing as just like a human idea,
or maybe one inspired by interactions with an AI,
that by the time you wake up, you have a company started
and the basis for people to actually do work.
Like that, in 10 or 20 years, people look back and like,
how did we not see that coming?
And look at all the jobs that are being created
because every single person is now creating
and has like six virtual companies.
The future is going to be wild.
Yeah.
Talk to me.
Let's go back to Tesla for a second.
Why the hate on Tesla?
Let me say, I think Elon is amazing at certain things.
Elon is arguably the greatest storyteller, fundraiser,
inspiration for anybody in the past,
maybe in all time, truly.
I think his relationship with the truth has been questionable.
And so in Tesla particularly, I think there was a time where the short sellers started to identify things, not because they just hated the company or hated the future or any of this.
And he was able to very shrewdly weaponize the us versus them.
They're trying to kill us, right?
Most short sellers that I know happen to be very disaffected people.
And they have a chip on their shoulder.
And to me, the motivating force and the incentive for them is not that they just want to make money, but they want to be right.
And they want to be right because they've identified somebody that they think is intentionally doing wrong. It's the same thing as an investigative journalist. It's the same thing as opposition research for a politician. It's the same thing for somebody that is trying to debunk a Sunday preacher charlatan. It basically is almost intellectually competitive, to say, you are trying to pull the wool over these people's eyes, and I know what you're doing, and I'm going to call you out on it. And so for
me with Tesla, I think that they got away with accounting fraud on warranty reserves and a whole
bunch of other things. I think there was a lot of prestidigitation and magic of look over here
while we're doing this. And today it doesn't matter because they got away with it. But I think
that there was not the same kind of honesty that I would ascribe to Jeff Bezos in how he built
Amazon. And, you know, raised a few hundred million dollars of equity. You could look at stock-based
compensation. You could look at debt as capitalization, but created this monster that is
profitable cash flow positive and never raised another dollar of equity. And Elon raised north of
$50 billion, took out $50 billion, treated it like an ATM, said, I'm never selling a share
and sold lots of shares, and whether he had to do it to buy Twitter or whatever. I just,
I don't feel it was done as honestly as other entrepreneurs that I greatly admire. Now, that said,
SpaceX I have no issue with. I think SpaceX is an extraordinary company. I think it's an incredibly
important American company. I think without it, we would be at a massive disadvantage. I think it is
truly a national treasure, run by Gwynne Shotwell, incredible engineers. We've backed a bunch of these engineers that have come out of there, from Tom Mueller on, that I just, I think the world of. I've just,
yeah, been much more critical about Elon's relationship with the truth as it came to Tesla. And in many ways,
I felt like the whole thing was unnecessary.
Do you think those are one-time things, or they're systemic, and they crop up every few months or something?
In his personality?
Yeah.
He's past the stratosphere now.
Like, it's, you know, he's proximate to power in ways that people can't compete with.
If you're an investor in, or you're Sam Altman at, OpenAI, you're not only worried about competition, you're worrying about a personal grudge from somebody who has the ear of the president, who can weaponize all kinds of systems of power, from the DOJ to the FTC to the FBI.
And I would be very nervous being an adversary with that kind of power.
So.
Altman came out and said he didn't think Elon would use that power against him.
Which is a nice and smart thing and a necessary thing to say publicly.
And I think that Elon has even said, like, I won't use that.
What's the saying?
Power corrupts?
Yeah, and absolute power corrupts absolutely.
But I don't know.
Like, you're in a position of power, at DOGE and OMB and the Office of Personnel, and you have influence and you can shut some of these things down. Does Elon love the SEC? You know, he's been pretty vocal about that institution. Does he love the National Highway Traffic Safety Administration? So if these things are gutted, you know, I think you've got more free rein to shut down criticism. You know, similarly, the best entrepreneurs, when short sellers are, you know, saying something, they don't
want to ban short selling. They don't want short sellers to be arrested. They just prove them
wrong. And so...
My favorite story about that was Brad Jacobs: the short report came out, the stock dropped precipitously. He borrowed $2 billion, bought back shares. And he's...
Yeah, there's nothing to it. Right? Right. Totally. We're going to double down and go forward.
Yeah. I think he turned that two into 10 or 8 or something. Like, it was just crazy.
So for me, I don't know, at 13 or so, like after I was bar mitzvahed, I became an atheist.
And I would see, like, these preachers exploiting people.
It just, like, irked me.
And it irked me in this, it wasn't, and I reflected on this over the years, it wasn't in some virtuous, holier-than-thou kind of thing.
It was intellectually competitive.
It's like, I see what you're doing.
You're running a con.
and I want to call it out, and it wasn't rooted in, like, self-virtue of, like, pursuit of truth.
The real thing when I, like, thought about it is, no, I want to show that you're cheating people.
Short sellers are necessary to a well-functioning market, right?
We need to hear both sides of the story and make our own judgments and decisions.
What point do you think that computers are going to really make most of the investing decisions?
Well, you could argue today they are, not because they're doing reasoning and analysis and fundamental work,
but because the structure of the market
is so dominated by passive indexation,
and that is effectively an algorithm.
And that algorithm says $1 in buy, $1 out sell,
and in both cases indiscriminately.
And so you just have a flood of money that goes into the market
and these indices buy everything
and it becomes this massive market cap-weighted accelerant
and then people say sell and then the money just comes out.
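A minimal sketch of the passive-indexation "algorithm" described here, with made-up tickers and market caps: a dollar in is bought, and a dollar out is sold, across every constituent in proportion to market cap, with no judgment about fundamentals.

```python
# Minimal sketch of cap-weighted passive flows. Tickers and caps are invented.
# A redemption is the same calculation with a negative dollar amount.

def cap_weighted_flow(market_caps: dict, dollars: float) -> dict:
    total = sum(market_caps.values())
    return {ticker: dollars * cap / total for ticker, cap in market_caps.items()}

index = {"MEGA1": 3_000e9, "MEGA2": 2_500e9, "MID1": 300e9, "SMALL1": 30e9}

inflow = cap_weighted_flow(index, dollars=1_000_000)   # $1M of passive buying
for ticker, amt in inflow.items():
    print(f"{ticker}: buy ${amt:,.0f}")
# The biggest names absorb most of the flow in both directions -- the
# market-cap-weighted accelerant on the way up, and the cascade on the way down.
```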
So in the past, I don't know, 10-plus years where this has really become the case with Fidelity and BlackRock and
State Street and others, the ETFs, which were well-intentioned. You know, you go and listen to Buffett
back in the day. It's like, just put it in the market, right? It's hard for active managers
to out-compete. Definitely the case for the past 10 or 15 years. But I do think that we will
see a return to active managers that are able to discriminate true fundamentals, in part because
I think that the cost of capital is just going to rise and all the funny money of the past 10 years is going to wash out.
Two questions, two rabbit holes I want to go down here. One,
at what point do you think active managers, an analyst is replaced by AI in the same way that
Zuck is saying an engineer at Meta is going to be replaced by AI?
Already there are AIs that can not only go through 10-Qs and 10-Ks and can listen to quarterly earnings
reports and CEOs that are talking at conferences or on podcasts and can get an emotional
sentiment can see where they're varying their language in ways that only the subtlest of analysts
in the past or portfolio manager could do. And I think the most valuable thing that AI is going to do
when you ask it questions and it comes up with the answers and assuming those answers are accurate
and cross-correlated and double-checked, they actually say, here's the five questions you didn't
ask. And so that is going to unleash real insight. Now, there is still this human aspect of being
able to look at somebody and decide do I trust them or not. And I think that the best analysts
are able to say very Buffett-like or Joel Greenblatt-like, is this a good business? And there's
ways to measure that, like a fundamentally good business, even if an idiot was running it. And then do I
think it's had a good price? And is therefore my expected return going to be high? And do I trust the people
that are running it? Because ultimately, I am allocating capital to them in the same way that somebody
allocates capital to us. I like the virtue of our private markets because I am less,
we're still beholden, but far less beholden to a day-to-day market, you know, Mr. Market fluctuation
of, you know, manic depressive, positivity or pessimism. We have 10-year locked funds. We're able to make
long-term bets. It's arguably this great source of time arbitrage when everybody else is looking
and discounting back a year or 18 months or two years. But, you know, you think about the three main
sources of edge, and we've talked about this in the past, but informational, analytical, and
behavioral. Informational advantage used to exist a long time ago. Regulations like Reg FD and avoidance
of insider trading and information that tried to equalize the playing field. In addition to a huge
influx of really brilliant people that are able to use cutting edge data tools, having an information
advantage is really hard. Having an analytic advantage where AI can play a role in that, let's assume
we all have the same information, and I don't have any better intel or information.
Like, for example, just an aside on the information advantage.
There was a hedge fund, which I won't name, which was very cleverly going and actually buying
stuff online from Adobe.
Every time they did, they got a piece of legal inside information, which is Adobe's web
URL, actually, when you made a purchase for Creative Cloud, would tell you 4,723 or whatever.
And then, like, six hours later, they would go, and it was like four million, and so they could infer and extrapolate what the sales were based on this, because when they bought it six days later or six hours later or whatever, they saw where they were in the queue and you can sort of extrapolate, that was legal to do.
But once that signal leaks out and somebody's like, oh, that's a clever way to figure that out, then it rapidly erodes.
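The arithmetic behind the order-number trick is simple enough to sketch. The figures below are invented (only the 4,723 order number echoes the anecdote): two purchases, two order IDs, and the time between them give a run rate you can extrapolate to a quarter.

```python
from datetime import datetime

# Back-of-envelope version of the order-number extrapolation described above.
# All numbers are made up; the point is the arithmetic, not the real figures.

def run_rate(order_1: int, t_1: datetime, order_2: int, t_2: datetime) -> float:
    orders = order_2 - order_1
    hours = (t_2 - t_1).total_seconds() / 3600
    return orders / hours                      # orders per hour

per_hour = run_rate(
    order_1=4_723,  t_1=datetime(2015, 3, 1, 9, 0),
    order_2=34_723, t_2=datetime(2015, 3, 1, 15, 0),   # six hours later
)
print(f"~{per_hour:,.0f} orders/hour -> ~{per_hour * 24 * 90:,.0f} orders/quarter")
```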
So informational advantage, hard, analytical advantage, brilliant people combined with brilliant technology, really hard.
And then it goes into this last one, which is the behavioral.
And that, to me, is the persistent thing.
Now, AIs over time will understand us in many ways better than we understand ourselves.
Google already arguably knows more about you than the closest people in your life
based on the things that you search for, search in private for.
And AI will as well.
And already there's an eerie moment that I appreciate because I've given myself over to
the information gods.
There's a required energy to try to maintain privacy, and I just feel like it's not worth it.
We can talk about privacy
because most people basically just want to keep private,
you know, their sex life, their bathroom time,
and how much money they have,
unless you are super rich or super poor.
Because if you're super rich, you broadcast.
You know, you're on the Forbes 100, 400.
You're showing the house you just bought,
the art you just bought.
You're signaling your wealth.
And if you're broke, if you're really poor, there's people that are on Twitter like, I'm dead-ass broke. I have no money.
And literally, it's the people that are in the middle, that are middle class but want people to think that they're upper class, or people that, you know... And so everything else, for privacy, I think, is out the window.
But the reason I was saying this is
I've given myself over to the information gods
and when I go to chat GPT
because it has memory
and it's constantly updating it,
there are times where it remembers things
and I'm like,
how did you know that?
And I forgot that it was like
from a search three months ago
where I mentioned something about my kids
and a place that I like to vacation
and part of me actually appreciates
that repository is compounding,
but it does sort of scare you
because I remember the things
that we talked about
And then if you were like, oh, I heard you went to that, I'd be like, well, who'd you hear that from? But now if I asked the AI, who'd you hear that from, it would be like, you told me that. You told me.
Wait till it gets on your device.
Exactly. And then, okay.
Well, let's go back up this rabbit hole a little bit here, back to passive indexation. So the rise of this is really post-2010, right, the mass rise of passive indexing. We've never seen, well, we did during COVID, but there was so much money thrown into the system. What do you think the second- and third-order effects of this will be, especially in terms of volatility or something unforeseen?
Well, you saw this a little bit with just how quickly the market reacted, with the single largest one-day loss, of $500 to $600 billion, in Nvidia, just because of the fear over DeepSeek. And the fear was a cascading, traditional information cascade.
Oh my gosh, what does this mean for the expectations we have about demand for compute and capex and the expenditures that people are going to have? Do we need to rethink this mental model? And so I think that there'll be things like that that whipsaw and shock people.
Do they become reflexive at some point? Like, maybe at $500 billion it didn't, but had that hit 700 or 800 billion or a trillion, does it then start the auto-selling and the...
It's a good question.
I don't know.
Obviously, systems where there are significant leverage in the system are ones that are most prone to that,
these sort of Minsky moments where, you know, things are going fine, going fine,
and suddenly they just collapse.
And usually that's where you have a lot of leverage in the system.
And sometimes it's hidden leverage.
I don't know other than some of the two-X or three-X levered ETFs that that's the case in traditional markets.
But where does this go from here?
I'm not sure on the passive active piece.
What will break this other than if you were to have widespread news reports of a handful of
active managers that are suddenly beating the market, you know, decisively, and they're pointing
to the structure of the market. And so there's a rebalancing where people start to shift out
of these things. On the volatility piece, what's interesting is over the past five, six,
seven years, maybe five especially, a lot of LPs, allocators, have gone into private credit, have gone
to private equity, in part because they are mechanisms of muting volatility and the vicissitudes
of the market because you don't have daily mark to market. And so there's been a little bit of this
perverse incentive. But as Buffett says, you know, what the wise do in the beginning, the fool does
in the end, and then these things get overdone. And so I'm actually worried about some of those asset
classes where private credit in particular, I think, was wise to do a few years ago and now is
overdone. You have another phenomenon, which is every major sophisticated, large private equity firm,
Apollo, KKR, Carlyle, etc., are all starting to think about or are actively thinking about
both permanent capital in the form of insurance vehicles like Apollo and accessing retail in a
huge way that many people see retail being the next wave of this.
And when you say retail, you actually just mean normal day-to-day consumers.
Individual investors that might have been on Robinhood and could never before access Apollo or Carlyle, but in aggregate, you're talking about trillions of dollars of investor money.
And so I think that they're going to be tapped.
They're going to be into these vehicles.
That will present new interesting financial vehicles because you're going to have to find
ways to give people liquidity for these things.
And so I've heard about some interesting things, actually from a friend Mike Green,
who I think is a really smart practitioner and student of markets. He was one of the earliest to this passive-active piece.
He was one of the earliest to understanding
the mechanisms behind the scenes for the SPAC movement.
He's early now to this idea of Uniswap,
which has a certain mechanism
that provides liquidity by having, let's say,
like 80 or 90% in treasuries
and 10% in some underlying,
and you're able to swap out some illiquid thing
for effectively some liquid pool
up to some point.
There's some repricing.
And he's been thinking about something that, like, Apollo is doing with State Street, and that this portends a movement into these almost artificial, like, if I would have talked about ETFs 20 years ago, people are like, what is, you know, don't we have mutual funds already? And they're like, no, but they're going to go super low fee and you'll be able to trade them on a daily basis. There's something here to watch about the flood of retail money that will go
into illiquid alts, private equity in particular, and new vehicles that are formed to be able
to provide liquidity because of that. And I think that that's both going to be really interesting
and potentially, to your point, creates something that sets up some massive blow-up.
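As a rough, hypothetical sketch of the mechanism Josh gestures at here, imagine a vehicle that keeps most of its NAV in Treasuries and a small sleeve in an illiquid asset: redemptions are paid at NAV out of the liquid buffer up to some point, and beyond that point they reprice at a discount. The class name, the numbers, and the haircut below are all illustrative assumptions, not a description of any actual Apollo, State Street, or Uniswap product.

```python
# Hypothetical sketch of a "mostly Treasuries + small illiquid sleeve" vehicle.
# All parameters are illustrative only.

from dataclasses import dataclass

@dataclass
class LiquidityBufferPool:
    treasuries: float      # liquid sleeve, e.g. 85-90% of NAV
    illiquid: float        # illiquid sleeve, e.g. 10-15% of NAV
    buffer_floor: float    # below this liquid balance, redemptions reprice
    haircut: float = 0.10  # discount applied once the buffer is exhausted

    @property
    def nav(self) -> float:
        return self.treasuries + self.illiquid

    def redeem(self, amount: float) -> float:
        """Pay redemptions from the liquid sleeve at NAV; once the buffer
        is breached, the remaining redemption prices at a discount."""
        at_par = min(amount, max(self.treasuries - self.buffer_floor, 0.0))
        self.treasuries -= at_par
        remainder = amount - at_par
        # The discounted portion is effectively a forced sale / repricing
        # of the illiquid sleeve.
        discounted_proceeds = remainder * (1 - self.haircut)
        self.illiquid = max(self.illiquid - remainder, 0.0)
        return at_par + discounted_proceeds

pool = LiquidityBufferPool(treasuries=85.0, illiquid=15.0, buffer_floor=10.0)
print(pool.redeem(50.0))   # paid fully at NAV out of the Treasury buffer
print(pool.redeem(40.0))   # part of this redemption reprices at a haircut
```

The point of the toy model is simply that liquidity is fine until the buffer is gone, which is where the "massive blow-up" risk he mentions would live.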
The other thing that I want to come back to in the rabbit hole here is you mentioned
persistent advantage is behavioral.
Yes.
Talk about that in the context of humans. How do we create an unfair advantage in a world of AI for humans? Like, what are the ways that we can, like your kids, like, how are you teaching them to navigate this in a way that gives them an advantage?
An advantage. Um, well, a behavioral advantage. So, you know, you go back some years and it was like, okay, we can still beat computers at chess. Done. We can still beat them at Go. Done. We can beat them in video games. Done. Okay, but we still have creativity. Done. Now, of course, human creativity is not dead, but every day I am doing
something creative on AIs that I can't do.
I cannot paint, I cannot draw, I cannot conjure.
I like taking photographs, and I like the composition of that.
But I can engineer prompts, which itself is an act of creativity, and get the most inspiring
muses and results, and I can take works of art that I like and put them in and ask it
to describe it and do it six times, particularly in, like, Midjourney, and recreate from the
prompt some alternative of it.
There was even an artwork that I loved, which was this mishmash of superheroes that looked like it was put through a blender. And I was going to buy the artist's work. And then I couldn't describe it to Lauren, my wife. And so I put it into Midjourney and described it. And then I just clicked a button and I made four versions of it, I guess 16 versions because each one was four. And it was insane. And I was like, why am I going to buy this? Because I just recreated it. And I felt morally bad, because I wasn't copying, but I had a perfect description of the style. And so I thought that that was pretty wild. So these are tools that I think
kids should be using. They should be learning. It's just like a language. And I think they need to be
versed with it, in part to understand the domains to avoid because they're going to be not void of
emotional or aesthetic or moral value, but you're not going to make money. And so my wife and I
debate this all the time about dance. Dance is amazing.
Um, we started dating and she took me to this dance thing. I was really not into dance at all. It was at the Parsons theater, this guy, uh, David Parsons, at the Joyce, and he has this performance called Caught. And me being into science and technology, I'm watching a bunch of people dance, I'm not really into it, and all of a sudden, uh, there's a weird ambient sound, very electronic, very Tron-like, and there's a guy, you know, in white pants,
and then a strobe light goes on.
Anyway, and the strobe starts flashing,
and all you see as a viewer
is this person floating in the air.
But behind the scenes, what they're really doing
is jumping, jumping, jumping,
perfectly timed to the choreography of the strobe.
So you see them caught,
but they're doing this crazy kinetic athletic thing, right?
And I was just like, that's super cool, right?
Everybody should see this, right?
And it's just like inspiring.
Now, that said, we had this debate afterwards
because she's like, you know, these dancers make no money.
And I'm like, well, market forces would say that there's too many dancers or there's not enough demand.
And they collect like unemployment insurance for half the year because they have to work other jobs and they don't have this and it's just wild.
And so we got into a debate about, you know, what is the societal value of this?
Now, I found it valuable and I would go and I would pay money and be a patron, but I wouldn't want my kids to go and pursue that unless it was that they were solving for just the aesthetic passion that they had.
But I think that there's very few crevices where AI will not creep in and either be able to do the thing nearly as good, including, you think about the compression that we all enjoy of a Game of Thrones episode, the compression algorithm of all of that talent, set design, special effects, screenwriting, a lifetime of acting and performing, you know, just swished down.
One of our companies, Runway ML, and there's others, you can conjure today, it's 10 seconds, but tomorrow it'll be two minutes, and full feature films with no key grips, no lighting, no costume design, no set design, no actors, and voices entirely generated by AI. And so does that strip this art of its soul, kind of thing?
I don't think so. I think it just creates a new form of art, just like Pixar, you know, for all the people that were doing Disney Mickey animations by hand, and then suddenly we get this 3D rendered graphics and incredible storytelling.
That will be timeless, but the tools that we will use will very rapidly replace these things.
So I think our kids should be embracing and using all of these tools.
The only tool that I restrict them from is TikTok, and that's for a variety of reasons, bad influence and the Chinese Communist Party.
But otherwise, I want them learning how to use every tool as they would, every appliance
in a kitchen.
And outside of those tools, where do you think the advantage comes from? Does, like, a networking advantage and who you know become more pronounced?
I think it's this.
I think it's human to human.
I think that if you always can frame things
as like what's abundant and what's scarce,
in a world where there's going to be
an abundance of access to information
and abundance of access to
creative construction of things,
art and literature and movies
and just by the way as an aside,
I also will take boring PTA messages from our school, and I will put them into large language models and send them to my wife, where I'm like, do this in the style of Matt Levine, the Bloomberg daily writer, or do this in the style of Shane Parrish, or do this in the style of Al Swearengen from Deadwood.
And it's just, it's absolutely entertaining and brilliant and take something that's so boring and cliched and hackneyed and like it just brings it to life, right?
I do the same thing.
So it's a lot of fun, right?
It actually makes the stuff interesting.
But the advantage is going to come in, what do I do with that when I output it?
I enjoy it for a second, but then I share it.
And so all of these things, all the value, all the market capitalization of all social media is about sharing.
And I still think that we're going to produce, we're going to share.
And what becomes scarce is this is like human connection because we are still human and we still want that.
We want to be hugged and we want intimacy and we want to laugh with each other.
I mean, I'll share just a quick aside from this.
Like two or three months before Danny Kahneman died last year, Lauren and I went over with a filmmaker and another woman, and we had dinner with Danny and his partner Barbara Tversky, who was Amos Tversky's wife.
And we were talking
about aging
and getting older and memories
and Danny had this great point that the pleasure of pleasurable things got less pleasurable, but less so than the pain of painful things got less painful.
So for him, the loss of a friend,
you became a little bit anesthetized to it.
The first time you lose a friend, it's tragic.
But when all your friends are dropping dead,
it's like, ah, this is just happening.
You know, the first person you know gets divorced.
And all these things over time,
the half-life of the pain just decreases, but you're still losing the pleasurable things.
So food didn't taste as good,
and wine didn't taste as good,
and music didn't sound as good,
and sleep wasn't as good,
and sex wasn't as good
and all these pleasurable things
but he thought that the pain got less painful by more than the pleasure got less pleasurable. And so I thought that was an interesting insight.
Barbara had a different view
and she's still alive
and I'm taking some license to share her view
but she said no, it's still painful.
And the main reason she said that pain is painful
is that these memories I have of Amos
or I shared that moment with this person
and the great human feeling
is commiserating about the thing
that we experienced together
or the memory and laughing hysterically
at which I still do with my childhood friends
of this shared moment
and AI will never get that
another person won't get the inside joke
and so to your question about the advantage, the advantage is that human connection, because we are still human and we want that,
we pine for it
and so I thought that was a really profound counter
to Danny's view
which is that when you lose these people
you lose the partner to amplify that emotion, a good one or a bad one.
And I think that being able to have like uniquely human experiences,
understand each other, support each other,
that that is still going to be an advantage.
A lot of the things you said,
what sort of the common thread with them in my mind
is we feel part of something larger than just ourselves.
Yes.
Like, relate this to working remotely, and how this interacts with other things, right, where we might not feel part of something larger than ourselves.
And remote work is a great example where you sort of,
the ability is there,
whether you do part-time or full-time,
to shut off your laptop and the world looks like you.
Right.
You're not forced to interact with people with different political views,
different socioeconomic status.
You surround yourself.
So you don't feel a part of something bigger.
And that changes how you vote.
It changes a whole bunch of things in your life, I would imagine.
I'm speculating here, but I'm sort of like thinking out loud.
No, I think a lot of your values, a lot of our values, and again, I'm going to invoke Danny here from a conversation before he passed away: you may think you think the things you think because you analyzed and you reasoned or whatever.
But no, the reason you think the thing you think is because of the five or six most important people around you.
And they sort of believe something and you believe them.
And so, again, an information contagion and you will tell yourself that you believe it because you really thought deeply about it and you reasoned through this.
But no, the reality is you believe things
just because there's this social phenomenon.
So working from home versus working in person,
there are so many.
Everybody is here Monday through Friday now.
You know, at first it was Monday through Thursday, take Friday. Now I'm like, look, if you need to leave, you go, family first, it's a principle here, never miss a concert, a recital, a science fair.
But we need to be together.
Why?
Because there's so many interstitial moments.
There's a chance serendipity moment
because I come out of a meeting
and I'm able to introduce you to Grace or Brandon, who is meeting with somebody,
and every day, hey, do you have a minute,
you know, knock, knock, and then somebody's making an introduction
and you just never know what it unlocks.
That never happens on Zoom or on calls.
It just doesn't.
The structure of that doesn't allow
for that serendipity and those sort of human connections.
The ability to really feel when somebody swallows
when you ask them a question
and they're feeling nervous or like, hey, like,
is something going on?
And they're, because everybody's fighting some epic battle.
You know, they've got relationship issues,
they've got parents issues,
they got a sick person,
and we are just, we're still human.
So I feel deeply that we should all be connected in person,
and I do think that that's an advantage
in a world of abundant AI
and sort of cold, sterile,
even if it has the simulacrum of,
and again, a lot of people will use AIs
in a beneficial way to share things
that they might not even be comfortable sharing with a person
and have a consultant or therapist to help them.
But I'll give another example,
which is another colleague here
moved from one city to another
and he happens to be religiously observant but is an atheist, okay?
But he moved to this new city
and he found a tribe
that he's like, I immediately plugged in.
Like we didn't know anybody,
we didn't have any friends,
but by being part of this religion,
we instantly had friends and peers
and I was like, not that it's cynical or selfish
or, you know, we could put a valence of meaning behind it,
but really at the end of it
is like, will you help me?
Is there reciprocity?
And that's this like ancient sense
of whether it's transactional or not,
it's overt or not,
you're signaling the depth of the sacrifices
that you would make for the group.
This idea that something is bigger than yourself,
belonging is arguably like the most pleasurable thing.
Having friendships, you know,
you do all these, look at all these studies
of people that age and what was meaningful to them
and leading a good life.
And being ostracized or feeling rejected or left out is, like, the most painful thing.
so I think that that's a timeless human truth
and I certainly feel like
and I encourage this with my kids by the way
I don't want them just to have a group of friends in school
because just like the diversity that you need in a portfolio
you need to have hedges and all these other things
because maybe something is not going right in that friend group
and then you are more at risk of catastrophizing
that oh my God like nobody likes me and you know
and so if you're in a soccer team
and you go to a religious group
or you're in Hebrew school
or you're on a dance team
and you have a neighborhood group of friends
and my kids are in like six different things
outside of school each
and they then can bring these people together
which itself adds this feeling of like
oh I connected people and I'm this node
and it sort of cements
the network in a way
that is profound and meaningful
and comforting.
I like that.
I want to come to aging a little bit here.
I don't want to do it.
I don't want to age.
Talk to me about this because
I feel like based on the research happening now, the amount of money going into this, the progress
that we seem to be making, at least on biomarkers that we understand, we seem to be able to
at best, dramatically slow our aging process. I think we're going to make a quantum leap in terms of average lifespan for humans, maybe adding 10 or 15 or 20 good years in the next 10 years?
I think that's possible, 10 or 15 or 20, not doubling. I mean, we did that, you know,
in a few generations, you know, people would die at 40 years old or 50 years old, and people now regularly live until their 70s or 80s. You know, this is an interesting thing because I'm,
we don't fund longevity work here. And I'm personally not invested in any of these things.
I will go to my doctor and I will get my blood tests and he will suggest that I take some
supplements because I'm low in iron or this or that. And that's entirely reasonable just to
maintain sort of homeostatic function.
But I don't go absolutely crazy and intense.
But I appreciate the people, Bryan Johnson and others, that are self-hacking and doing this and pursuing it.
And Ray Kurzweil was doing it back in the day.
Nobody has really seen Ray out that much.
You know, he was taking like, I don't know, 100 supplements a day or something like that.
And last I saw, I think he had a dupe and, like, it was like a messy situation.
But I'm glad that these people are doing it both in pursuit of staving off their own mortality
and as a public service, either interpreted
as I can't believe you go to these extremes
and I'm not going to do that
because it's super stressful
or maybe you're going to unearth something
and we're all going to be on metformin
and all these, you know.
But there is this timeless pursuit of avoiding death.
You know, it's very human.
It is.
And it goes back. Like, the first form of avoiding death was, I'm not going to die. So the search for the fountain of youth and Ponce de León; today, modern pharmaceuticals and drugs and supplements and lifestyle changes.
The second form was, fine, I'm going to die,
but I'm going to come back.
And so reincarnation.
And maybe that was the spiritual or religious sense.
And then today's version of that would be Alcor or any of these cryo.
I'm going to freeze my brain so that when they figure this out,
they'll bring me back.
The third was, okay, fine, I'm going to die,
but I am more than my physical self.
There's an ethereal, a soul.
The modern technological version would be the ghost in the machine,
endless sci-fi movies about people uploading themselves and their likeness, which, by the way,
are sort of a really interesting phenomenon and how we deal with loss and the ability,
if there is an AI that is totally trained on my voice and my likeness and everything I've
ever said, which I have done, that my kids would actually have a dad AI and maybe they can
consult it for questions or is that a good thing or a bad thing? I don't know, but it is going to
be a thing. Yeah. Then you have, okay, I'm going to die, but I'm going to live on through my
progeny through my children through my genes, which is the evolutionary impetus. And I'm going to
live on through my works. And you won't be there to experience either of those things unlike the first
three where you're not going to die or you're going to come back. And so I think about the people that
whether it's where I grew up in Coney Island, Brooklyn, they put graffiti on the wall and they put themselves
up until it gets washed away. Or if you're David Rubenstein or Steve Schwarzman, you put your graffiti etched in stone on the New York Public Library, but it's really no different, just $100 million instead of free and the potential for being jailed.
And then it's through your children.
And I think there it's very Buffett-like.
The moral sort of mandate would be, like he described Don Keough of Coke, that when you do die, you want them to say what they said about him, which was everybody loved him.
I don't have that.
Like, not everybody loves me.
But my kids, you know, is most important.
I think the theory was the people that you want to love you, love you.
Which is interesting, because the people that we celebrate the most, if you think about Steve Jobs, and even Elon, like, are they loved by the people that are closest to them? I don't mean millions of fans that don't actually know them.
But I used to have this debate with one of my best friends about Steve Jobs.
Like, the world loves Steve Jobs.
Yeah.
But like...
People in his orbit didn't always love Steve Jobs.
He's, like, terrible, you know?
Like, he's so mean or, like...
So I think that's really interesting.
But going back, the common thing amongst all the people that have tried to defeat death
from the people that weren't going to die through the fountain of youth or modern biotech, the people that were going to upload themselves, the people that were
going to come back, the people that leave it, you know, to their kids or to their works, the common
thing amongst all them is that they're dead. You know, nobody has beaten death. And so the mental
model that I like on this is, okay, take a piece of paper and put the day you were born on the front
and the day you're going to die roughly plus 80 years on the back. And the only thing that you may
have control over in part is the story that you write between these two pages. And my brother-in-law passed when he was in his early 40s, stomach cancer, you know, he lived a tragically short tome, you know,
relative to others. But maybe you get to live this epic tome. And it's like, how do you write it?
And who do you spend your time with? And nobody's going to look back and say, you know, I wish I would
have taken that extra meeting or done that extra business trip or something. It's like, I'm really
glad that I was there for my kids or my spouse. And yet you work really hard. Yeah. And I always
prioritize my kids. Like, I think all the time about their judgment. You know, some people are like, meet their maker or, you know... because I don't believe that, I care deeply: will they say my dad was there for everything? And in part for me, because my father was not present in my life, my parents split when I was super young. It's my little guy, who's, I have two daughters and a son, but my son, who's nine, always wants me to have a play date with my dad, you know. And we're civil and we speak a few times a year, but I'm like, it's just not that relationship, I'm not going to have that, you know. And he's like, yeah, but I really want you guys to... I don't know. I get to be the dad that I am to you
because I didn't really have that
and I'm making up for it now
and this is what it is.
And then I worry that
if they see such a present father,
do they take that for granted?
Dude.
And then do they screw it up
in the next generation?
I have the same thoughts.
I actually talked to a therapist with this
because I was like,
am I too present in their lives?
Like, they need some space.
You know, I'm home
and they get home from after school.
And my wife and I talk about this,
like my parents split.
My father was married four times.
All I wanted was a stable nuclear family. But if my kids grow up
in a stable nuclear family, do they take it for granted
and does, like, one of them become a cheater and
infidelitist and all this kind of stuff?
Because they, and I have no idea.
But I know for me what is meaningful and what
makes me feel good and is a totally selfish
thing that is about solving for what I want, but
selflessly, I think it ends up being virtuous.
So what would your top, like, three or four
priorities be then if you were to outline them?
My kids and my wife, call it family, number one.
I mean, you know, you think about, like, the people that lost everything physically and materialistically in the fires recently in LA. Like, family is, like, the, you know, and I don't care over time as they grow up and where they are, you know, but that to me is like the most important thing. Number two is purpose and meaning. I think this is a universal thing, but I feel lucky that I enjoy what I do. It's an intellectual puzzle. There's times of, like, great fierce competition, we're losing to, you know, I don't like to lose, we're losing out to another firm that's got an entrepreneur. I like the intellectual gratification of being right when
other people are wrong. I'm very intellectually competitive that way and discovering something that
people haven't discovered. I always talk about Linus Pauling, the double Nobel laureate, who won for both chemistry and peace. And he has this quote about science, which I just absolutely love. And this
is like, I hope until I'm 90 or 100 or 110 or whatever modern science lets me live till that I continue to
have this because it's an addictive feeling, which is that I know something that nobody else knows
and they won't know until I tell them. And I love that, discovering a legal secret and knowing that
this is going to be announced that the scientific breakthrough is coming and nobody else knows about it
yet. So that to me is like meaning and purpose. It's intellectually competitive. And I understand
the intellectual competitiveness. I want to be right. I want to make money. I want the credit for it.
but it really is about like the status,
because otherwise you would just do these things in private
and totally in quiet,
but I like that feeling of that,
even if it's vainglorious and ego and all vanity.
So family, that sense of purpose
driven by this intellectual competitiveness.
And then I think I didn't appreciate this as much,
but it's sort of adjacent to the first.
There's a handful of people who I imagine myself, like, retiring with
or like guy friends and people that I enjoy spending time with
that have the same sort of values,
and they're very family-driven.
And so my cousin in particular,
this amazing guy, Jason Redless,
one of my wife's best friends,
this woman, Molly Carmel.
They're both, we call it Framily.
They're like friends and family.
But, yeah, it's a powerful thing
that I don't want to ever lose.
That's awesome.
You process a ton of information.
What's that workflow like
for information to get to you?
How are you using technology to filter?
How are you filtering information?
So I typically go to bed between 12 and 12:30, wake up around seven. Kids, before they leave for school, about 40 minutes. I do a lot of physical activity, usually three days a week, between working out, trainer, jiu-jitsu, all kinds of interesting stuff. But probably about an hour to an hour and a half in the mornings of reading through something like 40 different papers now. It used to be like seven, but then when I started to travel internationally, if I went to the Mideast, I would find a version of some of the key papers, same thing in Japan and elsewhere.
And so now I read a lot of international papers.
And when I say read, I use an app called Press Reader.
It has the digital replica of this specific version, which I really value.
And I know we've talked about this in the past, but I like to know what the editor put on
C-22 that's not as visible on the website because there's meta-information that the editor
is saying, this is not important to be on the front page.
But if I disagree, and I think that there's a magnitude of informational importance,
that to me is like some sort of edge.
Then I will take screenshots of those. And so I will sort of, call it, scout and scour through all these papers, take screenshots. In some cases I may even take those screenshots and put them into an AI and basically say, give me the summary of this article, or give me the three key quotes that really matter. And so I'll go down all kinds of, like, rabbit holes with that. So that's the first, which is just, like, call it 24 hours' worth of information that is basically put into editorial decree. You can usually get the FT in New York at, like, four or five, six p.m. Eastern time. And so you have a little bit of an information edge, because most people don't get the FT for another 12 or 14 hours, they don't know that they could get it online, but that thing is valuable.
And I care about all those things
including not the sophisticated newspapers
but like the less sophisticated ones
like USA Today and I want to know
what is the average person going to read
when they wake up in a Marriott
and get the paper delivered under their door
and that kind of stuff.
Then Twitter, you know, I have all kinds of lists
that I follow and at a given time
it might be something that is geopolitical
and war-related, where I'm going down a deep hole, or sometimes it's AI and technology,
sometimes it's my team and what they're posting and reading about, but a lot on Twitter,
which I find truly invaluable. I mean, I know a lot of people say it's like this dark cesspool of whatever, but you can just filter through and cut out the people. I'm muting and blocking
people all the time, and I'm discovering all kinds of just absolutely incredible people.
It has been, as you know, from one of our first conversations, like this idea of randomness
and optionality. It is this huge randomness generator, this huge optionality generator, and the accessibility, and I just absolutely love it. So that's another thing where I'm really rooting for Elon and the continued success of X, because I'll continue to pay a lot of money for it, and I'd pay more money for it, but I find it super valuable and it's a real-time pulse. And I'm excited for Grok to continue, because I think that Grok and X, that's a repository, we talked about that before, a repository of information. One of the first things Elon did was cut off access to Google from the data, and I think that's the right move. This is our platform, same way as Meta has.
So I think that that repository of everybody's tweets and retweets and likes and the comments that they've made,
you can already go on and do this in sort of a relatively superficial way where you can say,
roast me.
And it will basically, off your past, I don't know, like 25 or 30 tweets, sort of roast you based on what you've tweeted about. But there are longitudinal accounts of people that have, I don't know, 10,000, 100,000 tweets, and that's an amazing pastiche of, like, you know, what...
There's an interesting thing here which we were just riffing on internally, which I'll come back to, just remind me, on sort of wrapping yourself in this information mosaic and breaking free from it. But papers in the morning, Twitter, internal Slack, emails, texts, you know, just, like, processing all this information. I use Rewind on my Mac, which is effectively doing nonstop screen capture, and there will be other tools like this, in part because I do not remember the source when I saw information.
It's sort of the same thing of, like, if you see a show and you were to ask me today, like, where did you, I don't know, I was on Apple TV, like, was it on Paramount? Was it on CBS? Was it on Netflix? Like, I have no idea, right? And in fact, usually when I do the Apple search and it doesn't show up, it means it's on Netflix, because it can't search Netflix, right? Yeah, just huge information omnivore, everything. And then there's some, you know, random writer that I follow, like Kathryn Schulz at The New Yorker, Adam Gopnik, people whose style of writing and the selection of
their subjects I find really interesting, and then I'll go deep into some of their themes.
So you use Rewind, Pressreader. What are the other, like, technological tools that you're
finding super valuable? Every AI. I might take an essay, read it, ask it to summarize the key points,
ask it to put it in different voices, take two different essays and say, where do these things
agree or disagree? And so, yeah, like just nonstop. I'm on AI easily more than Google now,
but, I don't know, two, three hours a day.
What have you learned about prompting
that would help everybody get better results?
Usually very specific.
Like, I give it a priming thing. Say it's a neuroscience paper:
You are the world's greatest expert in neuroscience.
You have read every paper that has been published.
You have both a skeptical eye to new claims,
but you are also open-minded to interesting correlations
that might not have been considered.
Read this paper and, you know, give me the three most provocative, non-obvious points and give me the three cliches, you know,
and so just, and by the way, I will put them into three different models at the same time.
So I will open three different browsers, you know, arrange them and put it into chat GPT,
put it into Claude, put it into one of the Perplexity models that's not running on those two.
And, you know, sometimes I'll mix and match them.
I love that.
Yeah, it's sort of like a palette of, you know, mixing.
We have not yet done this as a partnership, but we've talked about it: having an AI partner.
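A minimal sketch of the prompting workflow Josh describes above: a priming persona, a request for provocative points versus cliches, fanned out to several models and compared side by side. The model names and the ask_model helper below are placeholders, not real API calls; wiring this up would go through each provider's own SDK and keys.

```python
# Sketch only: the priming prompt mirrors the wording described in the
# conversation; ask_model is a stub standing in for ChatGPT / Claude /
# Perplexity clients.

PRIMER = (
    "You are the world's greatest expert in neuroscience. You have read every "
    "paper that has been published. You have a skeptical eye toward new claims, "
    "but you are open-minded to interesting correlations that might not have "
    "been considered."
)

TASK = (
    "Read the paper below. Give me the three most provocative, non-obvious "
    "points, and then the three points that are cliches.\n\n{paper_text}"
)

def ask_model(model_name: str, system_prompt: str, user_prompt: str) -> str:
    """Placeholder: swap in a real client call for each provider."""
    return f"[{model_name} would respond here]"

def fan_out(paper_text: str, models=("chatgpt", "claude", "perplexity")) -> dict:
    """Send the same primed prompt to several models and collect the answers
    side by side, so they can be mixed and matched afterward."""
    user_prompt = TASK.format(paper_text=paper_text)
    return {m: ask_model(m, PRIMER, user_prompt) for m in models}

if __name__ == "__main__":
    answers = fan_out("...pasted neuroscience paper text...")
    for model, answer in answers.items():
        print(f"--- {model} ---\n{answer}\n")
```

The design choice is simply to hold the primer and task constant so that differences in the three answers come from the models, not from the prompt.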
There's still a behavioral discomfort about recording conversations.
You and I are recording our conversation now.
But every partnership discussion we have, if we were confident that it was protected and encrypted, because we might say things that could be harmful.
You don't want them coming out.
It could insult somebody or like, or we have a piece of intel that we don't want out.
But if we were comfortable that it was perfectly private, which is a hard thing to promise.
But if it was, you would have
a repository of every conversation we've ever had over the past X number of years, the decisions that we wrestled with. You would be able to have somebody to advise us, an AI to advise us: where are we showing biases or inconsistencies between a decision we made three months ago and this one? What is different this time? Which voices are not speaking up? You know, and you can already get this in some cases with, like, certain Zoom calls or other recording things where it'll tell you who spoke for how long. And then you could run, like, a Bayesian analysis of, okay, given that we're looking
at these two companies, give me the outside view, the base rate of success historically,
which in venture honestly doesn't matter, but, and then give me a Kelly criterion of how you
might size this based on the projected internal confidence. And so there's all kinds of things
that we could internally do to use these tools, which I think over time will probably
experiment with. But the biggest thing is basically having like a capture of everything, you know,
everything that you see, everything that you hear, everything that my screen sees. I've already given myself over again to the privacy gods. And so I trust that that's siloed on my device.
It's not going to the cloud. But it's super helpful when I'm trying to search for something.
I'm like, was that a Gmail? Was that a text? Was that a thing on Twitter? Was that a PDF I read?
Where did I see that? And the ability to DVR my life is super valuable. If I could do that
with my conversations, like, who said that the other day? In fact, Lauren and I just had somebody
over, you know, we host people at our house and we couldn't remember who told this this thing.
And we were like, I had to go through my calendar to see who was over on Thursday or Friday.
Yeah, okay, it was, you know, but being able to search your life instantly, I think it's
going to be a generational change in the same way that people were not comfortable, you know,
posting on Facebook, and then they were comfortable, and then, like, now people are, like,
posting themselves in swimsuits and bikinis, and it just doesn't matter.
That, to me, is going to be a big step change.
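For the Kelly-style sizing Josh mentions a little earlier (blending an outside-view base rate with the team's internal confidence), here is a back-of-the-envelope sketch using the textbook Kelly fraction for a binary bet. Every number and the blending weight are illustrative assumptions, not Lux's actual process.

```python
# Textbook Kelly fraction for a binary bet: f* = p - (1 - p) / b,
# where b is the payoff multiple on a win. Numbers below are illustrative.

def kelly_fraction(p_win: float, payoff_multiple: float) -> float:
    """Fraction of the bankroll to stake on a bet that returns
    `payoff_multiple` times the stake on a win and loses the stake otherwise."""
    return p_win - (1.0 - p_win) / payoff_multiple

def blended_probability(base_rate: float, internal_confidence: float,
                        weight_on_inside_view: float = 0.5) -> float:
    """Crude blend of the outside view (historical base rate) with the
    team's inside view, weighted by how much you trust the inside view."""
    w = weight_on_inside_view
    return (1 - w) * base_rate + w * internal_confidence

p = blended_probability(base_rate=0.10, internal_confidence=0.35,
                        weight_on_inside_view=0.6)
f = kelly_fraction(p_win=p, payoff_multiple=10.0)  # e.g. a 10x outcome
print(round(p, 3), round(max(f, 0.0), 3))          # clip at 0: skip negative-edge bets
```

In practice, people who use Kelly at all tend to bet only a fraction of the full Kelly amount to account for estimation error in both the base rate and the inside view.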
I want to come back to the info mosaic, but one thing, we never talked about YouTube
being, like, such a huge data source.
Incredible.
Closed and slightly open, I guess, in some ways.
Yeah.
Like, well, I love that moment when, I think it was Mira from OpenAI, was asked, like, you know, so did you train on it? And she was just, you know... crickets.
Yeah.
Okay, the info mosaic and breaking free from, sort of, like... one thing I do love about X is that it shows you views that are contrary to your own, like the algorithm's gotten pretty good at that.
Yes.
And there is, what is it, Ground News, where you can sort of do this, where it will actually give you a bias on, you know, certain things, and it'll give you both sides of the view.
So if you truly are objective and, like, truly knowledge seeking, then you would want to
experience that.
And I feel like that will be an option that you just click and enable a feature and, you know,
it's able to identify some of the biases and whatnot.
This idea of the information mosaic was a recent conversation I was having with my colleague
Danny Crichton, who runs like our risk gaming stuff where we're coming up with all kinds
of crazy scenarios and imagining these low probability, high magnitude events.
And the idea was that over time, this perfect simulacrum of Shane or of Josh,
is going to exist.
Everything that I've ever said on every podcast,
everything I've ever written publicly,
forget about my private thoughts,
but just everything that I'm out there publicly,
my voice, my tone, okay.
And so I almost imagined it
like this Matrix-like mosaic,
like a Spider-Man costume
that's like form-fitting.
It's me or a close approximation of me.
But what if you want to break free from that?
In a sense, if I said,
give me something in the style of Shane Parrish,
it might conjure something in the style of Shane Parrish,
but in the style of Josh Wolf
or in the style of David Milch,
or Christopher Hitchens, you know, I actually love invoking dead voices, you know, to sort of bring
them back from the dead, right, and have them opine on the topic. What would Christopher Hitchens
say about this article, blah, blah, blah. But what if I wanted to break free stylistically?
If I said, give me an image of a horse in Tribeca in the style of Wes Anderson, you know, I can imagine the pastel palettes that it would conjure, and you can imagine that, too, with the, you know, rectilinear framing and whatever. But what if
Wes Anderson suddenly had, like, a new stylistic change in his oeuvre and wanted to just shift? Like, he'd be constrained, you know, in the same way that people hate when, you know, I don't know, maybe when Dylan went electric, or, like, you know, somebody else changes their style or their genre. And so there's this aspect where AI constrains you, and
just sort of playing with this idea of
you know how do you break free
in the same way that there might be like the right to
be forgotten that maybe you want to
change your style. The great virtue of college for most people is this quartet of years where you
can break free from who you were for the past four years. And nobody knows who you were and what you
cared about. And maybe you were into heavy metal, but you were in like the band, you know,
and you couldn't break free. Or maybe you were gay and nobody knew or all these things that you
can just suddenly, like, be yourself and explore new things. And there's this element where the great virtue of college is self-exploration against the constraints of high school, but could AI be this constraining force? Because the more content that you put into it, the more it knows you, the more you may have trouble varying from it. And so there's something interesting
there. I like that a lot. Let's talk military and technology and you guys are big investors in
Anduril. Where's that going in the future? Well, there's going to be a lot more brilliant minds,
I think that feel comfortable, motivated not only by a sense of purpose, patriotism, but also principle and capital making that they see the things that they doubted early on.
Like, why is this time different, another defense company of which there weren't very many?
But seeing Anduril's ascendancy and valuation and success and program wins, I think, has inspired a lot of people, like, wait, there's something going on here.
We went from 50 primes down to five.
You're seeing the rise of these neoprimes.
I deeply believe that Anduril in the next few years will be a $30 to $50 billion publicly traded business doing single-digit, mid-billions of revenue with software-like margins that are not like these cost-plus margins.
So that is going to usher in a big wave and they're buying companies that are acquiring smaller businesses.
But you'll continue to see that sort of evolution in a world that people realize is not Kumbaya peace and safety.
There are bad actors that, when we take a step back or are on our back foot or a little bit permissive, arm up.
It happened with Iran.
And I think the prior administrations from Obama and Biden were well-intentioned in trying to bring them into the Western world.
But, you know, it was a sort of ruse, you know, from an Iran standpoint.
Same thing with Gaza and Israel, and Russia, and who thought that we were going to see a land war in the 21st century where Russia would invade Ukraine. And China and Taiwan and North Korea and the African continent, as we talked about, and the Sahel and Maghreb.
Infiltration of a lot of these groups into South America. I mean, there's just lots of conflict waiting.
And the best way to avoid conflict is to have deterrence. And if Ukraine had nuclear weapons,
Putin wouldn't have invaded. Most of the West and NATO really said, don't worry, we've got your back, even though you're not part of NATO, and they never nuclearized. I think the world
timelessly,
you know, through all of human history
is going to face
enormous conflict,
resource wars, water may be
next. I think there's something like
1900 active
conflicts around the world, around water
rights. You look at China and Pakistan
control of the water. I mean, there's just like a lot
of resources.
You look at disrupting undersea cables, you know, sabotage efforts. You look at deep sea mining. You look at space as another frontier. There's just a lot of
opportunities for zero-sum conflict. And when you can't reconcile those conflicts through diplomacy or
negotiations or agreement, it goes to violence. And the people that can bring or affect or export
violence typically have the upper hand. And part of what has made this country great and made it powerful, made it the economic juggernaut, is that we have the most powerful military on the planet. It allows for the low-entropy system, even though the country at times seems chaotic, that allows for the high-entropy production of entrepreneurial ideas and free market capitalism and booms and busts. You could argue that that didn't just benefit the United States, it benefited Canada, Europe, it benefited a lot of Mexico, our allies for sure. You can watch many fictional movies that have run these counterfactuals of what would have happened, you know, if Nazi Germany had won, or if the Russians had landed on the moon before we did, like in For All Mankind.
But we're getting away from an era of like, here's a trillion dollar, you know, boat effectively.
It depends who you talk to.
Shouldn't we?
Like, I mean, if that boat can be taken out by a $3,000 drone, how effective is it?
Yes, for sure, the asymmetry of a threat to an aircraft carrier, uh, against a large fleet of drones. It is very much, if you talk to Sam Paparo, who's the head of INDOPACOM, he will say it is all about mass on target. There's certain things that, you know, automation cannot, you know, do. And he wants what he calls, which I guess is a technical term, a hellscape in that region, uh, the Taiwan Straits and South China Sea, so that you make it really, um, impossible for them to have sort of any military dominance.
But it is an era where it's about, you saw this again with Iran and Israel and Gaza and Syria's missiles and counter missiles and rockets and intercontinental ballistic missiles and hypersonics and space weapons.
It is just about going back to almost like Planet of the Apes, you know, one ape threw at another a rock or a twig or a stone.
Yeah, the weapons get more powerful, but the behavior doesn't change.
We're back to throwing projectiles at each other, you know, it's just that they're automated, at speeds or at levels of attritable, overwhelming defensive forces. That is the battlefield.
Do you think values become a disadvantage in some ways then?
Like, for example, if the United States were, oh, we need a human operator to pull the trigger,
and another country was, no, it can be completely automated.
And therefore, in a dogfight, we're more likely to win.
Look, this is already happening in the information space where we have certain,
and in the autonomous space.
I was in the Pacific region with SOCOM
and there's a drone operator
who's flying the drone.
There's another drone operator
who's piloting the weapon system
and there's two lawyers.
So they're helping the commander
who is effectively given like a godshot
of how many combatants and civilians
can be killed in what ratio
and sometimes it's like five to one or 10 to one.
But there's lawyers that can
authorize because we have a certain rule of engagement that frankly gives these military personnel
the ethical comfort that this is a superior system. But for sure, if there are people that don't have
that same moral code, in some cases they can be at least temporarily advantaged.
Well, you can think of that through AI too, not just military, right? If we restrict, we put restrictions
on any technology in another country doesn't. Sometimes that can cause an advantage to another country.
China has the 50 cent army, you know, these people are getting, you know, 50 cents for every tweet and information they put out.
The State Department, when they want to tweet something out through groups, there's literally like a disclaimer that says, and it's like one woman in Tampa that's doing this, like, this was sponsored by the state.
So we have these ethical restrictions which definitely tie our hands behind our back in some cases.
And our enemies will always try to weaponize this.
So, I mean, you can look at many vectors today that don't seem like a threat vector,
but they have been weaponized.
Social media information, we know.
And the best fix for that is identifying the bad actors
and also inoculating people with a heightened degree of skepticism.
But the vast majority of the American population
will not be inoculated.
They will see the things that they want to see.
They will follow the accounts that they want to follow.
And then occasionally those accounts
will start to pepper in other information
that they want people to believe.
And that's how information cascades can go.
We have open systems, immigration.
You're seeing a lot of the rise
of the populist anti-immigrant movement in part
because in some cases it's a result of good intentions
of providing sanctuary cities
and wanting to help people and provide amnesty
and help immigrants come here
because that's what our country was built on
and then you want those people to assimilate
and when they're not assimilating.
But then you also have bad actors like Putin
who has weaponized immigration
and put migrants on people's borders
to create pressures so that you can get a political movement
from inside the country
that will be sympathetic to the nationalist sensibilities
and he's orchestrated that very well.
And so infiltration into our university systems,
which accept foreign capital
and you see Qatar that is influenced
very massively domestic U.S. universities.
China doesn't allow that.
You know, U.S. is not able to come in
and sponsor Chinese universities.
TikTok, of course, a huge one, right?
We banned it years ago,
right when it had become TikTok, from Musical.ly,
in part because at the time,
I seemed like a conspiratorial nut saying,
I don't trust this
with the Chinese Communist Party
having control over this.
And then behind the scenes, myself and many others have played a role in helping to orchestrate what I hope will not be thwarted by Trump to see this divested.
I have no problem with people using TikTok, but it should not be in the hands of the algorithm of control of the Chinese Communist Party.
And it's on a lot of government phones.
It's insane.
It's really interesting.
Why haven't we seen more isolated attacks that are cheap and using technology?
And what I mean by that is, you know, a nefarious actor could probably, for two or three grand, effectively plot an assassination?
I guess the question would be to what end?
And so you still have to realize that many people,
even if they're like evil geniuses,
have an objective in mind.
And do they just want to sow chaos
and create distrust in a system
and have people scrambling,
whereas there are an opportune time to strike?
Think about the Israel operation with the beeper plan. This was 10 years in the making. Now, they could have done it at any point in time five years ago, but they waited until a precise moment. And so being able to do the thing and deciding when you do the thing are two
different decisions. But I think that we've been warned for a very long time about hacking and
infiltration into our physical infrastructure. For sure, somebody could shut down air traffic
control. And what we saw just recently between the Black Hawk helicopter and this regional
plane from, was it Kansas City, you know, crash in Washington, D.C. You could see the FAA shutdown,
you know, and have a glitch. You can have
infiltration into our banking system. And, you know, just like the Sony hack, right, the big thing with the Sony hack was not that the systems were disabled, it's that information was revealed. You want to create civil war in this country? Just reveal everybody's emails for the past year, the things that we've said about each other. You know, I mean, that would, like, reveal truth in a sense, right? It was the great irony. So the obfuscation of these things in private helped to create a civil society. Our water systems, our infrastructure, our traffic lights, you know, I mean, all the things that you've seen in sci-fi movies when, like, things just start breaking. I'm actually amazed that our infrastructure globally, but, you know, even in New York City, even in this office, there's a million SKUs in this office, you know. Above our heads right now there's an HVAC system. Like, the fact that we trust this system and it's not going to fall and explode or blow up, and then we're shocked when these things do. But I'm constantly amazed that the entropy, the forces of entropy, are constrained by either really good engineering or inspection of systems or whatever it might be, the maintenance of systems,
which is another really interesting thing, this idea of maintenance.
Like the past 10 years have been all about growth, growth, growth, growth.
You go to a financial statement on CAPEX, you've got growth and you've got maintenance.
And I think in a world where the cost of capital keeps going up for a variety of reasons, I think there's a ton of dry powder in venture capital and private equity, but a lot of it I call wet powder, because this money is basically reserved for companies. And people don't realize, reshoring, all of these things, you know, tariffs, they're going to be inflationary; there would be a rising cost of capital. If you have a rising
cost of capital, if you are a CFO where you're on a board and you're thinking about good
governance and capital allocation, we're not buying the next new, new hot thing unless we really
have to, like AI today. Yeah. We're thinking about how do we maintain the existing assets we have?
And those assets could be satellites up in space. They could be military installations. They could
be our telecom infrastructure, our bridges, our waterways, sanitation processing, our HVAC systems,
our industrial systems, all those things you need to be maintained. So I'm increasingly interested
in new technologies. This could be software, services, sensors, all kinds of things that can help
apply to old systems to maintain them for longer, depreciate them for longer, let them last for
longer. I think there's going to be increasing demand for maintenance of systems. But I'm amazed
that everything around us is just not constantly breaking. It truly is, like, miraculous when you think about it, right?
Yeah.
What do you think at Doge?
The currency or the initiative?
The initiative.
I think it's a virtuous thing because it's shining a spotlight on a lot of things that were just done because they were done and you get this bloat.
Or in some cases there was like overt obfuscation.
So I think sunlight heals all and putting a spotlight on ridiculous spending or ridiculous inefficient things.
you know, I will say I grew up sort of a center-left Democrat my entire life.
The first time you go to the DMV, you become a Republican.
Like, you know, it's just like you want systems that have competition because competition makes things better because if you have a monopoly on something, you don't have to improve.
If there's one regional carrier for an airline, if there's one restaurant, if there's one place you have to go to for your passport, you don't want that sort of centralized control, because the service is going to suck, because they don't have to do any better.
So I think if you can put a spotlight on excess and waste and bureaucracy and at least begin the
conversations at a bare minimum of, wait, we're spending how much money on what?
I think that that's a virtuous thing.
Whether or not these things will be effective at really reducing costs, TBD, but it actually
seems quite positive that they may hit some of their targets of trying to reach, what is it,
a billion a day or more, and if that could end up reducing the deficit by 10 percent or 20 percent, let alone 50 percent, going from $2 trillion to a trillion, that would be
incredible.
So whatever the motivation, I don't believe it's patriotism, it could be intellectually competitive,
it could be power, whatever the motivation, I think that the means to the end, I think
the end is a virtuous pursuit.
If you were to take over a country effectively and you were in charge of policies and
regulations, what would you do to attract capital and become competitive over the next 20 or 30
years? What sort of things would you implement? What would you get away from and not do?
Well, I have an adjacent answer of, if I were Secretary of State or Secretary of Defense for the day.
Sure.
Which I'll give you first, which is I would really put priority on Africa as a continent, and particularly the Sahel and Maghreb, because I do believe, between violent extremists, Russian mercenaries, China infrastructure, you are one terror event away, projected into Europe, that creates the next Afghanistan, and suddenly NATO and the U.S. are in there dealing with ISIS. You're already seeing, you know, the first authorized strike by Trump on ISIS in Somalia.
I saw that today.
And so you've got Sudan, Chad, Mali, Niger.
Like, it is just a hotbed of people that were coming from Syria and Afghanistan,
Islamic extremists.
It is a bad situation.
And I think that we should be proactive there before we have to be reactive, because being reactive is a lot more costly in lives and money, in blood and treasure.
The second thing I would do is a hemispheric hegemony declaration.
I just went to Nicaragua; for a variety of reasons we went there instead of Costa Rica, but I felt much safer in Costa Rica. And I was worried that I was not going to be able to leave Nicaragua. We went with a friend who happens to be a prominent journalist, and he was not allowed entry into the country. So it really threw a wrench into our family vacation, his family of five and my family of five, because the government is trying to take over the banking system and they don't want it to be covered by financial journalists and these kinds of things.
And you look at who is in there, and you literally have a presence from Hezbollah, from China and the CCP, from Iran. It's a bad situation. The places that we think of, most of Central America, the Caribbean, and South America, as vacation spots where we get our coffee and go on a nice vacation, there's massive infiltration from adversaries. And so I think we are
losing the game and I would declare almost like a new Monroe Doctrine kind of thing where we say
the entire Western Hemisphere, you've got a billion people, both ability to project
in the Pacific and the Atlantic.
You've got mostly English and Spanish-speaking people,
save for Brazil with Portuguese.
You've got a ton of resources,
a ton of brilliant educated doctors and whatnot.
And I would just shore up this hemisphere,
particularly against CRINK, you know, China, Russia, Iran, North Korea,
and their influence.
If we were worried about, like, Cuba and the Cuban Missile Crisis and things proximate, 90 miles or so from Florida,
I think that China is doing very smart strategic things.
So us going back in and saying we're going to, you know, reclaim the Panama Canal and our influence on it, and, setting aside provocations of Mexico over, like, the Gulf of America versus the Gulf of Mexico, I think that having influence in that region is really important.
So those would be the two things I would do as SecDef or SecState: declare hemispheric hegemony and make sure that we shore up our allies in the region and get our adversaries and their presence out, in part because there's so much commerce and money and infrastructure going in, and then focus on the Sahel and the Maghreb in Africa.
For country competition and brain drain, you want the best and the brightest. You need to fund basic research and basic science. It should be undirected. That's the serendipity
and the randomness and the optionality that leads to great breakthroughs. You want capital markets to
be these low entropy carriers for high entropy entrepreneurial surprise. So predictable rules and
regulations. Taxes, I would lower; I don't know why we don't have a flat tax. I mean, I know why we don't, but I would just have a flat tax, make it super simple. Rich people are going to scout around and figure out how to get around the tax system anyway, and poor people are burdened by it. I get progressive versus regressive, but I would just simplify our tax scheme massively. Anybody that is coming here
and getting an education in this country, I would staple a visa to their diploma, as long as they stay here and work for an American company for at least five years. Let them become, you know, we want the best and the brightest here. As an example again, and I don't mean to make this all about China, but they are our most dominant adversary: 50% of all AI undergrads in the world today
are being graduated by China. In our own country, in the United States, 38%
of researchers in AI are from China.
So we're outnumbered even domestically.
It's a big deal.
And we used to attract something like 23%
of all foreign graduates here.
That's down to 15%.
People are either going to other countries
or staying in their own country.
And so we need that.
That's what won us World War II.
You know, if Einstein had stayed in Germany or...
What causes that to happen?
Is it the tax rate?
Is it opportunities available?
Is it housing costs?
Like, what are the factors that go into people leaving?
Well, start with the attracting part.
You know, as Walter Wriston said, people go where they're welcome and stay where they're well-treated.
So we should be welcoming.
Now there's a debate about immigration and, you know, we should distinguish between, you know, the bad people and, like, brilliant people.
And we should want them here.
That just comes down to, like, basic vetting.
Right.
But, you know, some of that is exploited, you know, by a lot of these consultants with Wipro and some of the, you know, Indian BPO, business process outsourcing, firms. Housing is a difficult one, but people can always figure it out. New York City is expensive, but you can live in Long Island City, or in Brooklyn and Queens. But I think, yeah, housing availability matters. I mean, our cities are so rich and full of culture and people, and particularly if you're a young person, you want to be around the density of that because you're trying to find peers and a mate and, you know, all of that.
Even if you're from another country, you go to New York, you can find your enclave of
Korean or Chinese or Russian or Ukrainian or Israeli or Caribbean. It's all here. So I think just
having a culture that embraces this and encourages it, you already have a robust venture capital
system of risk taking. Many other countries don't have that. So that's another thing.
But if you were to design a system from scratch, you want openness with security. So some means
of vetting. You want a great education system that can attract people that view it as a status
to have graduated from that particular school,
and people want to be around people,
whether this is in a company or in a country
that are like them, that are competitive
and highly intellectual,
that they respect or admire or want to compete with.
So that's number two.
Third, something we did in the 1980s, I think it was in 1980, was the Bayh-Dole Act, where government funding for research would allow the university and the principal investigator to actually own the intellectual property, which became an asset.
That asset could be licensed to a company.
It absolutely opened the floodgates
for venture capital to be able to commercialize that.
That happened to coincide with the ERISA changes allowing retirement plans to go into venture capital.
So now you have a pool of risk capital,
which you need for taking risk on unseasoned people
and unseasoned companies.
And then you need a robust capital market system
to be able to continue to allocate money.
But again, capital goes where it's welcome,
stays where it's well treated,
true of human capital, true of financial capital.
And so a rules-based system, a strong military,
you know, if you're starting a country from scratch, you're not going to have that, but then you need great allies. Think about Singapore. Yeah, I think that's a phenomenal model. Singapore is a great model. That's awesome. I hope some of the
government people we have are listening to this. I want to come to IP for a second and copyright
and then wrap it up here. So do you think that AI should be able to create IP or copyrighted
material? Like if I tell AI to write a book, should that be copyrightable? And who owns the copyright,
the AI or me for the prompt? It's super complicated. You know, the first debate about this is the great irony, right? OpenAI investors and stakeholders were up in arms that R1 stole from OpenAI, but you can make the argument that OpenAI has trained on the repository of, like, the public internet and, you know, every piece of art that's ever been produced. Now, if you were an art student and you went to the Louvre or to the Met or to MoMA and you sat there and studied it or took a picture of it, well, we learned through copying. We learned from mimicry and imitation, and we remix these things. And, you know,
There's this great, what is his name, Kirby Ferguson, who did Everything Is a Remix. I just sent it to a friend; it was updated last year. And it's so brilliant in its compilation of every facet of culture that you love, from books to your Tarantino movie, to the Beatles, to art, you know, to scenes of movies. Like, it was all copied from something, and you're like, wait a second, that riff came from this 1940s song from this African-American blues guitarist that John Lennon or Paul McCartney stole. And so everything was sort of stolen from somebody. It was imitated,
tweaked slightly. And by the way, that's what we are, right? I mean, you get two people who exist, and then there's this genetic recombination of their source material. Every one of my kids is different, but they came from the two of us. And so remixing is how everything, you know, happens. It's like Matt Ridley said, ideas having sex.
So to your core question, yes, I think that if I do a calculation and I'm using a calculator
instead of like doing math by pencil, that calculation is still an input into my output.
If I'm using AI to generate art and it's my prompt instead of the gesture of my brush and, you know, the strokes of my hand, I think it should still be mine.
Even if it was trained the way a great art student was, by staring and learning and studying and then emulating. And then these things evolve. You look at Picasso through all the different phases of his style of art, you know, from realism and portraiture to cubism and abstraction; these things just evolve until you find the white space that defines you. And that goes back to, like, if I train an AI on everything I've ever written, my voice, a new voice, is still rare and hard to create. So I actually think we should probably worry more about how you break free from the constraints of these things than, you know, whether they should be copyrightable.
Well, that goes back to our earlier conversation.
Do we just end up in this lane that we can't get out of?
Or we don't even recognize we're in a lane, which, I guess, is in some ways even more devastating.
And the brilliance of all this, again, like, I'm a big believer that we make our fictions
and our fictions make us.
And if you've watched Westworld, I don't know if you've watched an episode or two.
In the first episode, you know, you have a guest who comes to the park, and he's sort of squinty-eyed, looking at a host who, you learn later on, is actually a robot, but he doesn't know it at the time. He's looking at her and she goes, you want to ask, so ask. She knew what he was going to ask her. And he goes, are you real? And she goes, well, if you have to ask, does it matter? And, you know, I'm going to sort of spoiler alert on Westworld: it's all about these hosts interacting with the guests, and they're there to serve the
guests. But in fact, it's the opposite. Because every host is watching and learning, every
small nuance, every gesture, every inflection of your voice, every cadence of your speech,
and it's learning you so that it can basically create a perfect simulacrum of you and 3D print
biologically a version of you. And so it's a really profound philosophical question about how
we're interacting with these things. But all these things have been trained on the sum total of all
human creation. And now they're being trained on the sum total of human creation plus artificial
creation. And some of that is done with human prompts and some of it is going to be done
automatically. But I just think it's going to be part of the total oeuvre of creation.
And I think it's a beautiful thing.
So does anything about this scare you? About AI, like the direction we're heading?
I think in the near term, the thing that scares me is, again, scarcity and abundance. What becomes abundant is people's ability to use AI to produce content. And I don't know, if I'm getting an email from somebody, did an AI write it or optimize it, or was it really a thoughtful note from John? This young college student who's persistent, was it really them? And can I
infer something about their persistence and their style of writing? Or did they put it into an AI, and it drew on the repository of what influences me and what I've talked about and what I care about? You know, so many people were like, oh, I heard you on this podcast and I felt compelled to write to you because I too care deeply about family and, you know, blah blah. Right? I mean, those are surface-level things. But somebody that's more nuanced about it, am I being manipulated by them or by the AI? If it's them, there's a cleverness to it that I might admire; if it's just the AI, I feel suddenly more vulnerable. So what becomes abundant is not just information or misinformation or whatever, but the production of it. What becomes scarce is veracity and truth. And that, to me, was less scary but more, you need to be inoculated and immunized and vaccinated, and you're almost going to become a little bit more distrusting. But, like, your reactions right now, I might say something and you might say, oh, and maybe you actually thought it was profound, or maybe you're like, this is not interesting at all. But there's something authentic, right, about this, and we are reading each other and reacting to each other. That, to me, is going to become ever more valuable. So our humanity, the interactions, are the scarce thing, even if and as, through other mediums, it's hard to tell.
I love that. I always love talking
to you too. So I get so much energy and ideas out of our conversations and I'll be chewing on
this for weeks. I know we always end with: what is success for you? You've answered this before.
I'm curious to see how it changes. It really is the eyes of my kids. It is, for me, them saying, my dad did that, or my dad made that, or my dad was present for me.
And I think it's the story I tell myself about my own life and my relationship with my father and wanting to invert that.
And so for me, success is like them saying, I'm proud he was my dad, he was a great father, and I'm proud that he does all these things.
And when we fund a company or like some of these secrets that I talked about, I share them with my kids.
And so I was taking my middle daughter to soccer. My oldest plays tennis, my middle does soccer, my little guy plays basketball, like 10 days a week. He's better at nine years old than I was at 19, and I was reasonably good.
And I like sharing these stories.
So I'm like, you know, next week there's a story that's going to come out about this particular thing
and nobody knows about it except the company and now you.
And they're like, oh my God, really?
And I'm like, yeah.
And like you can't tell anybody, you know.
And I just, I love that feeling.
That's awesome.
And I do it in part, not because I want them to learn about it,
but I want them to be proud of me, as selfish and vainglorious as that is,
and to be like, oh, my dad's cool, you know.
I think you're cool.
You're not my dad, but, man.
I'll tell you, my 15-year-old daughter definitely does not think I'm cool.
She says, you are so cringe.
I think everybody's kids say that, right?
It's the same with my kids.
Like, instead of telling them something, and they might be listening to this, I'll sometimes get my friends to tell them.
Yeah.
And then all of a sudden, it holds weight.
But if I tell them the same thing.
Totally.
Yeah, whatever.
Same thing with our spouses.
Yeah.
Thank you very much.
Shane, always great to be with you.
I admire what you've built.
And the repository and compendium of the ideas and the minds that you've assembled,
it's like a great thing for the world.
Thank you.
Thank you.
Thank you for listening and learning with me.
If you've enjoyed this episode, consider leaving a five-star rating or review.
It's a small action on your part that helps us reach more curious minds.
You can stay connected with Farnam Street on social media and explore more insights at fs.blog,
where you'll find past episodes, our mental models, and thought-provoking articles.
While you're there, check out my book Clear Thinking.
Through engaging stories and actionable mental models, it helps you bridge the gap between intention and action.
So your best decisions become your default decisions.
Until next time.