BigDeal - #75 AI CEO Speaks Out On the Dangers of AI (And How to Win Despite It All): Brendan McCord
Episode Date: June 19, 2025. Codie and Brendan discuss the profound impact AI will continue to have on the business world, including how it will affect day-to-day business, the workforce, and even interpersonal relationships. They discuss the dual nature of AI as both a potential job creator and destroyer, the role of regulation in technological advancement, and the historical context of technological change. Maintaining human purpose and autonomy in an increasingly automated world is emphasized, alongside the need for community engagement and the cultivation of a new generation of thinkers to navigate the challenges posed by AI. Want a powerful tool that doesn't cost a fortune? Omnisend has everything you need to scale your store - for a fraction of the price. Click here to start for free: https://your.omnisend.com/codiesanchez30 To learn more about Cosmos Institute's research and programs, visit cosmosinstitute.substack.com/ Want help scaling your business to $1M in monthly revenue? Click here to connect with my consulting team. 
If you are ready to buy a business, get on a call with my team to learn how we can support you: https://contrarianthinking.typeform.com/to/WBztXXID?typeform-source=www.youtube.com Chapters 00:00 The Dual Nature of AI's Impact on Society 02:55 Navigating Job Displacement and Creation 05:56 The Role of Regulation in Technological Advancement 08:48 The Historical Context of Technological Change 12:02 The Future of Work and Human Purpose 15:04 The Importance of Human Agency in AI 17:48 The Philosophical Implications of AI Relationships 20:46 Building a New Generation of AI Thinkers 24:05 The Role of Community in a Technological World 26:58 The Quest for Autonomy in an AI-Driven Society 01:01:29 Navigating Relationships and AI Dependence 01:03:55 The Ethics of AI and Human Interaction 01:09:12 AI in Healthcare: Risks and Benefits 01:12:35 The Role of AI in Governance and Decision Making 01:17:41 The Future of AI: Autonomy and Labor 01:24:03 Competing in an AI-Driven Economy 01:30:01 Cosmos Institute: Bridging Philosophy and Technology 01:35:22 Lessons from Success: Philosophy and Learning MORE FROM BIGDEAL: 🎥 YouTube 📸 Instagram 📽️ TikTok MORE FROM CODIE SANCHEZ: 🎥 YouTube 📸 Instagram 📽️ TikTok OTHER THINGS WE DO: Our community Free newsletter Biz buying course Resibrands CT Capital Main St Hold Co Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
The stuff we're building now, it mediates 20% of waking life.
What we think we know about AI today doesn't necessarily hold for the future.
I think in this decade, for the first time ever, anyone who can think clearly, take action,
and communicate their ideas will get rich.
But also, we may lose more jobs than ever before.
Be replaced by robots and AI in ways we can't even fathom now and potentially leave an entire generation behind.
I think it's a tale of two worlds.
So today I'm bringing you Brendan McCord, one of the sharpest minds in AI. What will happen to our human need to labor? I think more jobs will arise out of this growth, but there'll be
unpredictable categories. It's an unknowable future. We've become this nation that used to be owners,
now less than 10% of us own a business, and maybe there is a way that AI can help us take that back.
Will people use artificial intelligence to realize a new kind of independence? Alternatively,
will people use it as an auto-complete for life? Will they basically say, hey, tell me what to do? If we become
a nation of order takers or sheep from artificial intelligence, it will kind of solidify this
trend, whereas this is our moment to reverse it. But we're almost like a little puppet on the
strings in some ways. Exactly. Wow, that's so fascinating. Hi, I'm Codie Sanchez, and this is the
Big Deal podcast. Today I'm bringing you Brendan McCord, one of the sharpest minds in AI. Brendan's not
just thinking about where AI is going. He's thinking about where we're going and how to stay human
while we get there. He built not one but two AI startups that got acquired for $400 million.
And prior to that, he was deep within the government where he was a founding chief architect
at the Joint AI Center for the U.S. Department of Defense. He was also part of the DOD's first applied AI organization. And he led the strategy, both classified and public versions,
to figure out how to do machine learning at the Department of Defense. Now he's working at
Cosmos Institute, which he founded. And I believe he's attempting to train the next generation
of philosopher builders who believe in humans and that AI should support us, not the other way
around. This podcast is going to help tackle the question of what will AI kill first? Can we make
money with AI? And do humans still thrive? It is a big mission. So this episode today, I think,
could change your life if you listen closely. It's about tech, sure, but it's also about power,
autonomy and building a world worth living in. And so I wanted to have you on to talk about AI because
it's been dominating my mind lately. And in a very, like, neophyte, normal-human way, it is not part of my day to day that I think about AI. I own laundromats, for fuck's sake, you know? So like, this is not core.
And yet, even in my business, I feel this visceral fright and also excitement about what is happening
in the age of AI. Microsoft fired 7,000 people recently, I think. The CEO of Upwork sent a note to all of his employees that I thought was fascinating. I don't
know if you read it. And it basically says, it does not matter if you are a programmer, designer,
product manager, data scientist, lawyer, customer rep, or a finance person, AI is coming for you.
If you don't make that move, you're going to be out of work. Not only in our company, but also
across the industry. There's not going to be demand for people who are working like there
were five years ago. So my question to you is, what is going to happen with AI and jobs, and what will happen to our human need to labor with AI?
So I think, like, if you're a CEO, you've got to ready your team for the future as you see it. You don't know. I don't know where things are headed; it's an unknowable future. But you've got to make an attempt and then get your team focused on it. And so you've got to jolt them sometimes. And I see it in some of those letters where it's like, hey, this job won't exist. But I think underneath that
is probably a much more positive story. In other words, like, you sort of have to separate out the work of the CEO to, like, wrest attention to this and say, like, you know, stop doing the thing
the way you were doing the thing and start really taking this seriously from the underlying
mechanic. And the way I kind of visualize it is like the economy is going to grow. And it's
going to be this like, you know, growing sphere that on the surface will have all sorts of cracks,
you know, and all sorts of like tectonic relocations and shifts and stuff like that. And so I think
more jobs will arise out of this growth, but there'll be unpredictable categories of jobs.
And by now, I think people have analogized this a lot to the past. But like a hundred years ago,
we had an 80% agrarian economy.
We were like farming.
And if you had told people that we would be 1% agrarian, you know, 1% in agriculture,
they would have been all kinds of skeptical.
They would have said, well, are people not eating grains?
Like, no, we're still eating grains.
Are people, you know, just out of a job?
Are they just, you know, using leisure?
No, people work pretty hard still.
I mean, we have a lot more leisure hours, actually.
we do have, like, four times the leisure hours as we did then.
But all these categories of jobs would have arisen. Like, there's a stat about the beginning of the film industry: would people have been able to imagine that the movie budget of Avatar was, like, more than, I don't know, the GDP of the United States?
I mean, I may be a little bit wrong on that, but, like, the growth is just staggering and weird, and we don't realize it.
But another way to think about it is that, like, the bottlenecks have shifted over time.
They were like muscle in a lot of the earlier economies.
They were then like clerical work or repetitious work in others.
And now there are things like maybe compute, right?
Like when, you know, my friend is building Stargate here, actually, with Crusoe.
And like, they're rushing to build this because there's like literally not enough compute.
Explain what Stargate is for anybody that doesn't know.
Yeah, Stargate is this kind of epic-scale project to build massive data centers that can train the next generation of AI. The whole idea here
is based on just like we had Moore's Law, like people would be familiar with Moore's Law as a way to
describe how chips advance over time, how they get denser and better. We have this thing called
scaling laws, where, you know, we can throw more compute at AI and we get more performance, and we think that we can reliably predict that trajectory. And so what that
does is it means that like CFOs and venture investors can invest ahead of that. They can say,
look, I don't know what the next generation looks like, but I know we need a billion dollar data
center and a 10 billion dollar data center and, you know, on and on and on. And so they're building
it really quickly. And by the time it's built, you know, they're going to need to build a bigger one,
right? And so these are new bottlenecks that require a huge amount of jobs, a huge amount of like
labor content. And so figuring out sort of where those bottlenecks, you know,
are, that's an important thing. The other thing I would say, though, about the job is that, you know,
it's a good time to go back and think about, like, a lot of the thinkers at the beginning of the
mechanization that happened in the Industrial Revolution were, you know, they were very positive
on, like, what automation was, but they thought about, you know, the job as more than just
an economic thing. In other words, like, we're not purely economic beings or even primarily.
Like we, we do need work.
We do need food.
We need, like, you know, material comfort or we seek it at least.
But, like, the job, I mean, it's clearly something more about, like, self-architecture or becoming.
Like, we develop through the work that we do.
And so this is, I think, a mindset shift that probably is necessary: to think less about the narrow sort of definition of what a job entails. Yeah. Well, there are two things I
want to double tap there. I saw this image the other day that I thought was so fascinating. I actually
didn't realize it happened this fast because I was thinking, well, how fast will AI move? Like, how fast do I
need to move? What is this angst I need to have on AI? Should I be integrating it faster? And I saw
this gent on the internet share a picture of Fifth Avenue in New York in 1900. And we can put it up here on
YouTube. And it's like filled with horses in 1900. And then 13 years later, the same Fifth Avenue
Street has zero horses and is filled with cars. And then I got curious and I was like, well, how many
horses were in the U.S. in 1900 versus today? And I found it was 21 million horses in the U.S.
in 1900. And today, somewhere between six and nine million. Obviously, we have grown the population,
human population a ton, but we've actually decreased those poor horses, replaced with cows that
we now eat. And so I guess my question for you is, like, maybe it's that same question. It's,
it's how fast will this scale and move in relation to the industrial revolution or even just
automobiles being created? Like, instead of 13 years, is this three years where all the quote-unquote
horses are gone and replaced with AI agents? Yeah. There's a guy who writes about, like,
technological diffusion, and he goes back to, like, the invention of, I'm going to say, the tulip. But I don't know why a tulip would have to be invented; it's a natural thing. I think what I mean by that is the spread of tulips, like tulip growing. I promise he says that, otherwise it wouldn't be in my mind. But he writes about all these diffusion curves across technologies, and one thing you have to keep in mind is that technology, even if it's immediately available, takes a long time to, like, propagate. In other words, you could stop right now. Google just did its big announcement. Like, stop, pause,
you know, nothing else. And we would have a few decades of like very profitable,
meaning like good uses of technology to just harvest. Right. And why is this? Well,
I think a lot of it is the government. In other words, I think that individuals are able to
adapt relatively quickly. Like some things are deeply ingrained. And so my
parents will never use AI.
I'm convinced, right?
And it's not because government regulation is getting in the way.
But for many of us, we have a kind of plasticity.
We've grown up in an environment where we expect to change.
It's wired.
So we can take hold of it pretty quickly.
And certainly when we're pushed and incentivized, we do.
Like if we're in a company, for fiduciary reasons, it's like, no, we're going to get our lunch
eaten.
Like we've got to go.
So companies and individuals, especially young ones, they're hungry for the adaptation that it entails.
But a lot of things get in the way.
A lot of things get in the way.
And mostly those are interventions based on this like regulatory scaffolding.
I say this because one of the most important things to always keep in mind is in thinking
about a society, is the society adaptive?
Can it adapt to new technologies, to changed circumstances, to new preferences? Open societies, liberal societies in the old sense of the word, they can.
Societies that are run by top-down mandate, they're highly, highly fragile.
And so this is where, you know, U.S. has a distinct advantage versus China versus Europe.
Like these are extremely maladaptive, though they may be more efficient at any given moment to run on a top-down basis.
It's the decentralization of the U.S., as much as we've preserved of that, that really renders us, like, the leaders.
So could there be an argument then that, for instance, the U.S. today definitely has more
regulations than it did during the 1900s.
There's more of a stranglehold overall federally today than there was in the 1900s.
And so, in fact, maybe usage of AI and proliferation, even though it can go faster, aka you can produce things on the internet faster than you can on an assembly line of cars, might actually go slower because we have so much regulation.
Yeah, I think there would be. It's like regulation tends to act on the counterfactual, meaning there's a world in which, 50 years ago, we started using
nuclear power, and that world is crazy and awesome.
But we didn't, right?
We discovered it.
We split the atom, you know.
And then we didn't do anything with it.
I mean, I did on a submarine, but, like, civilian use in the U.S. was nil, essentially, right?
And so the problem is with these things is, like, you don't get to look at that future.
You can imagine it.
But it's like, we didn't get that, you know.
We missed the civilizational subsidy that it entailed.
So you make a good point, but it's like we have to compare, like, what might have been if we had had fewer restrictions.
So, you know, I always try to keep that in mind because the counterfactual is like, is the thing that matters.
It's a good point.
But you do see in Europe right now, for instance, it's like a couple articles this week where they said we are severely behind when it comes to AI because we have sort of too much regulatory burden on people, which basically is just too many laws.
Yeah.
And it's cool that Europe can see it.
The only reason they can see that is because the U.S. has fewer and so we're ahead.
That's the counterfactual.
But, like, the reality is, yeah, there's another world, this other, like, U.S. without the regulation, you know, that it does have, that's going even faster and farther, and in that world we've, like, cured cancer, you know, or whatever. We just don't, you know, we don't get to live in that world. So when you see these headlines that are like,
your job will be gone in three years, we will lose, you know, a fourth of the labor force, we will have to institute universal basic income, you know, we will have potentially a recession for the many and proliferation for the few.
What do you think?
Like, is that wrong?
Are they wrong?
I mean, I think it's wrong on many levels.
Yeah.
I talked about how I think there are still going to be bottlenecks. One of the things that humans
are extremely good at, particularly entrepreneurs, is like being alert to opportunities
that come from interesting recombinations of inputs and like an awareness of preference and of
like local circumstance. I know that's a very academic way of describing entrepreneurs. Israel Kirzner, a scholar, has this, like, beautiful framing around alertness. But, like, that never goes away.
We need people to be alert to things that humans want and then have the kind of creativity
to find and experiment with what they want. Never goes away. And, as I said before, the categories
of jobs, the people I met here, you know, the former Mr. Beast president, like that job is
unintelligible to somebody in the 1980s.
It wouldn't make any sense; you would have to sit with somebody for 20 minutes to explain what that job even meant.
And so there's that point.
On the UBI point, I think that
UBI crucially separates,
like untethers,
one's contribution from one's reward.
And I don't mean this in a kind of moralistic way,
like we need contributors.
I think we do need contributors,
but I mean that the price
mechanism in markets is the thing that tells us what is worth figuring out about, what people
value, where the opportunity is, right?
It's like a heat map that goes around and says, look, this profit pool, the only reason the
profit pool exists is because there's something valuable there.
People value it.
And so if you separate, you know, the reward that somebody gets from private property, you first of all kind of abolish the system of private property, or substantially diminish its usefulness.
But you now have people that have no kind of need for directedness towards mutual benefit,
towards helping other people.
That's a crucial slowdown.
Like that's a guaranteed way to levelize and slow down an economy.
You saw this in Britain after the war, by the way.
Like, they didn't do UBI exactly, but they had a ton of, you know, health-service-type things that basically took a natural distribution of income, meaning that which results from individuals acting freely and, you know, trucking, bartering, exchanging,
like Adam Smith would say, you take that distribution and you flatten it and you introduce
these like kind of national health service style things. And what happens, the economy predictably
comes to a halt. So I'm much less focused on kind of managing a distribution of income. In other words, saying, like, there shouldn't be an Elon Musk and there shouldn't be this thing, and, like, applying a pattern that I think is right, right?
I would reject that.
I would much more prefer to work on how we create greater wealth that can be distributed,
that can be, you know, generated across a society that can help people, you know, who live
today and who will live in the future.
Like the wealth question is the more important one.
Yeah.
I also think it's interesting because when you go back through history, there are really not great examples that I can think of, and you are much better on this than I am, of when technology came and eviscerated opportunity and, you know, decreased a population's ability to earn and work. But you have example after example of the opposite. And so my case
against UBI has always been, well, one fundamentally, I think, that purpose and labor is, like,
uniquely human and it makes us happy. And that would be like an Arthur Brooksism, that, you know,
without that, we're not very happy, in fact.
But simultaneously, there don't seem to be very many examples or counter me here.
Are there very many examples where technology was introduced and it became worse for society
to earn?
Now you could make the argument tech could make us less happy, which you could say social
media did in some ways.
But has that ever happened through history?
Yeah.
So locally it has, right?
So if you have like a group, a community, a region of, I don't know, Indonesia that's set up around the sewing machine.
Like that's what they do.
They produce clothing, textiles.
They sew.
That's what they've been trained on.
And then you have an automated sewing machine that can do that.
Then what do we expect?
Well, we expect to have dislocation, job loss, that sort of thing.
The key there is that this is a moment for adaptation; it doesn't take these people off of the map.
What it means is that the next thing that they need to do is now in view or that they get
a kind of motivation to figure out what that next thing is.
So we're a highly dynamical system.
Like we're constantly adapting.
And every opportunity that technology obviates for the individual, meaning everything
that technology can handle with automation simply opens up a new frontier.
That's what I hear when I see the CEO saying, hey, this thing, your job as you know it is over.
The other side of that coin is like the job you didn't know was needed, the door just opened.
Yeah.
And to your point, maybe that job has 4x more leisure time for you with a higher potential earnings.
Yeah.
I mean, empirically, it does, right?
Yeah.
Like, yeah, the lifetime leisure hours have gone from, like, 40,000 to 160,000.
Wow.
And so this is, like, the marvel of capitalism, that it's given us more leisure. Most founders think they have a traffic problem, so they throw money at ads, more content,
more clicks. But the truth is, your site visitors aren't the problem. Your follow-up is. If someone
visits your store, adds to cart, then disappears, most platforms just let them leave. Omnisend
doesn't. It triggers abandoned cart flows, win-back emails, and even browse abandonment messages
all on autopilot. It's not just email, it's email plus SMS with smart segmentation built-in.
You can trigger a discount code to someone who clicked but didn't buy.
You can follow up with a new product after someone made a purchase.
You can even A/B test everything to find what actually converts.
And you don't need five different apps to do it.
Omnisend gives you all the tools without the Frankenstein stack.
If you're bleeding leads and calling it low conversion, stop.
You don't need more eyeballs.
You need a machine that closes.
That's what Omnisend is.
It's so interesting.
There really needs to be a campaign for that, because I think people today think we work more than ever, you know, work incredibly hard. And then you go back and you try to do it. I mean, we do these series where we go and I work in a porta potty business for a day.
Yeah.
I'm like, fuck me.
This is way harder than working on the internet.
Like just categorically, it takes longer.
It's more work.
Yeah.
And so it is funny that there's not a PR campaign for that and there probably should be.
I mean, I think it's part of the human experience that the things that work well,
we're least aware of.
You treat them as water.
I went to Harvard Business School, which is, like, a mecca of capitalism, or a West Point of capitalism. I don't know what to call it. Not West Point, the Naval Academy of capitalism.
But there was one class on how markets work.
And it was called reimagining capitalism.
And the focus was on what's wrong with it.
And it was the solutions were things that I later came to be aware
were things like syndicalist socialism,
you know, like worker communes, things like this.
It's like, wait a second, we should study
the promise of markets. And so I did this. Basically, I got introduced to Tyler Cowen, who's the, you know, chair of Mercatus. It's Latin for markets. He's one of the top political economists of our time. And we created something where we brought entrepreneurs in to, like, read about the fundamentals of markets. Read The Theory of Moral Sentiments. The Wealth of Nations. Adam Smith. Read these kind of foundational texts, because people don't even understand
what markets do. They don't understand.
how they work. They only kind of criticize things based on, I think, often ill-formed conceptions
of justice that have arisen around like the market critics. Yeah, you know what's interesting
is I think for young people listening today or really for anyone. I mean, if you want to
break out in your life successfully and you are curious, this is an incredible time with AI to have
knowledge at your fingertips. Golden Age. It's the Golden Age. And also simultaneously,
you say, I think it's an incredible moment to matter in history. Because, I mean, think about where we
are in Austin. You've got, you know, University of Austin, this campus that's happening here,
where people are trying to do free market principles and think about capitalism in a unique way.
Then you've got University of Texas Civitas, Ryan Streeter. Then you've got your Cosmos Institute.
There is like this revival, it feels like, to me, that is happening of core principles.
And I think, you know, for most of my lifetime, those were not highly regarded and had been under attack in some ways. So I also think it's interesting for young
people. Like if you want to find a group of people who really care about laboring and production
and who will tell you that like when you get lost in that thing that you're so obsessed with that
you lose track of time, that that's okay and beautiful, this is like a really cool time to do that.
Yeah. And, you know, I want to, I guess, can I just say one thing about that? So actually a couple things, but I'll say them quickly. So one is, like, I thought of another thing on the reading list.
Immanuel Kant wrote, What Is Enlightenment? And he channels Horace, I think, when he says, he says in Latin, sapere aude: dare to know and have the courage to use your own understanding.
This is such a profound thing.
Like, this was the Enlightenment project.
He was reflecting on, we always talk about like the Enlightenment.
He writes this like definitive reflection on what the Enlightenment was.
And it was like, you know, we previously would be told by our betters what to do in an
aristocracy or in a monarchy, you know. Now it's like we have the courage to use our own knowledge.
You made me think of this when you thought about, like when we talked about people who are
curious using AI systems, like my three- and five-year-old. It is a golden age for them to figure things out. My five-year-old wants to learn gardening. She literally can go to ChatGPT, and has, and is learning how to become, like, a gardener. And it's like, I'm just blown away. So cool. And then I do want to talk
about the Austin scene, because I actually do think that's pretty powerful.
You know, people will know this, but like, that enlightenment thing, it happened in coffee
houses in Edinburgh.
You know, the Greeks met at the Agora, right?
The French, like we talk about salons now, the Parisian salon.
And so every moment, there's this, like, there seems to be this need to, like, get together,
bump into one another, work on projects.
And yeah, the ones you mentioned in Austin are really special. I would say Austin, I'm bullish on its ability to become, you know, not only a great startup hub, I think it's probably the fifth-best startup hub in the U.S., and therefore the world. But I think I'm even more bullish on its ability to
like fuse the kind of entrepreneurial mindset with deeper thinking about the, the goals
that technology might serve, like the ends of technology. My pithy way of saying it, because I care a lot
about human autonomy. I think this is like a really critical thing that I'm trying to raise awareness on.
It's like we can be the capital of autonomy. That means autonomous systems like what Elon Musk is
building, but also this philosophical conception of what it means to be an autonomous human being.
Yeah, I mean, you're so right. I mean, even Silicon Valley, right? That was sort of the center of the
internet for so long. It arguably still is in some ways, but in some ways isn't any longer.
Yeah.
And so, yeah, if you want to play a game, go where the game is played.
And so I think if you believe in sovereignty and, you know, the power of the individual
and the human, Austin's an incredible place for that.
And what's interesting is a lot of those cities, they weren't like major thoroughfares
that would become the New Yorks and the San Franciscos of the world. I mean, there was also, you know, Amsterdam. When you come to, like, base-level free-market-principles thinking, you know, that happened in Amsterdam's salons too.
Yeah.
And so, you know, it kind of takes me to one of your points that I thought was so interesting.
You talked about something that's near and dear to my heart, which is how in the U.S., I believe
that we have slowly become this, I say a nation of serfs, you say a nation of employees.
We've become this nation that used to be owners, 80% of us owned things, now less than 10% of
us own a business.
And maybe that there is a way that AI can help us take that back.
We can reverse a trend.
And those trends are really hard to reverse.
I mean, since the 1800s, we've sort of slowly lost our ownership of businesses.
Can you talk about your nation of employees becoming entrepreneurs?
Yeah, so I love this.
So Jefferson, his idea for early America was that we would be independent, we would be self-reliant.
We would be a nation of farmers.
And he really cherished, I mean, he brought into being this idea of, you know, the gentleman farmer. You know, Tolstoy writes about this with Konstantin Levin in Anna Karenina.
And this is like a beautiful archetype of like self-reliance.
But it really was true in a sense for the free people within America that they were
self-reliant.
Like I have my mother and father-in-law.
They have a ranch an hour and a half outside of here, and I grew up in, like, D.C. suburbs, so I didn't really know what this meant. But, like, you are very entrepreneurial, very self-reliant in managing your affairs. There's no one you can appeal to.
There's no process you can appeal to. You don't have a boss, right? And so think about a nation
like that and what they would have voted for, what their preferences would have been, how they
would have governed themselves. Now think about a world in which, like, 10% of us are self-employed, entrepreneurs, whatever it is, you know,
and most of us are spending all day every day,
listening to a boss, following a process,
kind of turning our mind off, that sort of thing.
The effect that it has on preferences is really profound.
Like Aristotle writes that we are creatures of habit,
and this is like a very well-known phrase,
but if you really take it seriously,
like humans are creatures of habit.
And so if we are habituated day in and day out,
to be kind of told what to do,
given instructions, here's how you follow the process.
It changes us fundamentally.
And I'll give you a great example.
So people know that East and West Germany grew up under two radically different systems.
East Germany, Soviet system, West Germany, more like the West.
And in COVID, so decades after the fall of the Berlin Wall, you have very different responses
among people that grew up under the system of Soviet communism versus the West,
they're more compliant, they're more willing to listen to authority.
And I just cite this because it really seems to be the case
that the more we are free, the more we are independent, the more we love it, right?
Could you ever go back to being an employee?
Hell no, you could never go back, right?
Could I ever go back?
I could never do it.
I mean, I might, you know, if I have to serve in government or something like that, but, you know, you can never do it, right? You just end up loving this freedom. It plays a huge role in your life. So all this is to say that, like, we find ourselves at a really precarious moment because we've become a nation of employees. Now you bring on artificial intelligence. Will people use artificial intelligence to realize a new kind of independence, to, like, raise their potential to be more entrepreneurial? I hope so. I think that's the hope.
Alternatively, will people use it as an autocomplete for life? Will they basically say, hey, tell me what to do? And I worry that at the top labs, the AI labs that are really driving this, a lot of people behind closed doors believe that we should in fact do that, that we're being foolish rebels if we don't listen to artificial intelligence telling us what to do. They're missing something that's fundamental to human flourishing.
You can tell I'm passionate about this.
And so if we become, you know, a nation of, like, order-takers shaped by artificial intelligence, it will kind of solidify this trend, whereas this is our moment to reverse it.
Wow, it's so fascinating.
You know, it kind of takes me to a point I was thinking about before with you, which is Elon Musk,
I heard him say that whoever develops AI and the principles they follow will decide whether AI becomes the best or the worst thing to happen to humanity.
And so what role does human judgment or the people who are determining our AI play in what
eventually comes into our hands? Yeah, I think it's everything. So I'll give you some examples.
Like if you look at past revolutions, so the printing revolution is a big one. And I think about early,
we were talking about early America, so I'll stick on that.
So Benjamin Franklin was a master printer.
He came 200 years after Gutenberg, but he took the printing press, and he took an
enlightenment idea that knowledge should not be controlled by the church.
It should not be controlled by the state.
And he built a lending library and a network of independent publishers.
After Franklin, Joseph Goebbels, the Reich Minister of Propaganda, took the same technology again, another 200 years later, and twisted it into a system of mass manipulation. So he propagandized the country, consolidated it around the Nazi party vision. This just highlights for me that, like, the technology was not different. What Franklin was doing with his independent publishers and what Goebbels was doing in kind of, like, corralling all the news: same technology, right? Enormous difference. One built for freedom, the other built for control.
You see this just to give another example,
like the inventors of the World Wide Web,
of the Internet,
Tim Berners-Lee, as one example,
really had, like, openness at the core, decentralization.
That was like a major, major philosophical position
they went into it with.
They wanted the Internet to grow discourse.
China, in its Great Firewall, used that same Internet technology as a way of saying, aha, we can now control what billions of people see.
We can create a conformist state
that consolidates and sustains power.
So I think Elon's right.
I think you need to have people
that are focused on the human goods,
focused on key ideas that underpin the West,
building these systems.
And if you don't have that,
you're going to have one of the most perfect systems of control
ever delivered to human society.
Okay.
When I sell my business, I want the best tax and investment advice.
I want to help my kids, and I want to give back to the community.
Then it's the vacation of a lifetime.
I wonder if my out of office has a forever setting.
An IG Private Wealth Advisor creates the clarity you need with plans that harmonize your business,
your family, and your dreams.
Get financial advice that puts you at the center.
Find your advisor at IG Private Wealth.com.
When a country's productivity cycle is broken, people feel it in their paychecks, their communities, their futures.
What does this mean for individuals, communities, and businesses across the country?
Join business leaders, policymakers, and influencers for CG's national series on the Canadian Standard of Living,
productivity and innovation. Learn what's driving Canada's productivity decline and discover actionable solutions to reverse it.
I think that's why what you're doing at the Cosmos Institute is so important, to give you a little plug here, but there's two things I think are fascinating about the Cosmos Institute. One is
your speed to action. I have just found with everyone that I've met that is successful, the faster
you move seems like the more money you make over time. And like the fastest people I know are
often the most successful. And I think that's counterintuitive because when we're young, we're told
like be careful, don't take risk. You know, what if you fail, blah, blah, blah. And yet all of my
friends who are quite successful, they move incredibly fast. And so I think how long is the Cosmos Institute
been around now? We're now like a year and a half.
Okay. So a year and a half from zero idea, basically, to, and then you can explain where
you're at today with having done grants for a ton of AI entrepreneurs. And kind of with this huge idea of we want to create the next generation of philosopher builders, let's say.
So let's talk about this. And for the people listening, what I want you to think about is like,
when you have like a big, huge, ridiculous idea, how can you move fast on it and how can you get inspired by somebody who was like, I don't want to just,
not that there's just, but I don't want to just create the next AI company. I want to force
function change how all AI companies are built by providing a platform for everyone and a thesis
and then rallying some of the biggest names in the world around it. Yeah. So can you talk about that?
You should be our pitch person. So what we saw, yeah, we think we need more Franklins.
We need more philosopher builders like Franklin who can take ideas about human flourishing
and actually use that to inspire what they build.
The stuff we're building now, mind-blowing statistic, but it mediates, meaning AI mediates,
20% of waking life.
So this is very, very serious in terms of being the kind of hidden structure of civilization,
of human thought.
And by mediates, you mean it takes up about 20% of our waking hours.
Yeah, or like the information.
Yeah, exactly.
It guides it, the information you see.
You know, it's not always perceptible.
But we're almost like a little puppet on the strings in some ways manipulated by AI for good, bad, or of our own will.
Exactly, exactly.
And so it just plays a massive, massive role now and in the future.
That number is not going to go down, right?
And so more important than ever, like this is a moment when philosophy matters and when the builders matter.
So we said, okay, if that must be so, then we need a new kind of technologist. How do we get them? You look across the landscape and you see universities, almost without exception, as being places that produce people who are either narrow in their specialization, narrow technicians, or conforming ideologues who really are not thinking about these questions very deeply. They're kind of indoctrinated to one view of the world. That's not
helpful. So next, you look at the tech companies. The tech companies are incredible at building these
systems, but they tend to focus more on like the means and not on the ends, right?
Focus on the customer, that stuff's all good, but like on the overall end how it impacts human
life, there's not a lot of deliberation there, right? Or if it is, it's all done like kind
of privately by the people in tech. And then think tanks are, you know, really smart, but they theorize. They don't build.
And so we realized we've got to build a new institution, and we call it an academy.
I will just say that there have been moments when,
you know,
just because in case people think this is crazy,
like some institutions have risen to the occasion. Like I mentioned the Radiation Lab at MIT, right?
It took, you know, engineers,
made them into inventors
that helped win World War II.
Or another good example is University of Chicago.
Yeah.
It took scholarly economists, turned them into reformers who freed markets across five continents.
And so you can do this.
Like you can create an institution that's fit for the moment.
And that's what we're doing in Cosmos.
We started a year and a half ago.
We got an incredible group of people around it.
Tyler Cowen, he was very optimistic about, you know, AI,
people like Jack Clark, who co-founded Anthropic,
who's a really subtle thinker about, like,
possibilities of risk and so forth.
Some incredible AI researchers that are deeply philosophical,
some philosophers who are also technical.
So we like kind of have this cool hybrid crew,
very, very special concentration of talent.
Just look at them on our website and you'll see.
But we moved out with a kind of like private sector urgency.
We're a nonprofit, you know, we're 501c3.
But like I didn't hire anyone who comes from the nonprofit world.
We were super entrepreneurial.
I tell people before they come, like this is not a job.
Like this is, like, we're kind of zealots for the mission. We work extremely hard. And in a year and a half, we've backed
over 50 projects. We have over 10 fellows. We have an entire new AI lab at Oxford. It's the first
in the world to combine philosophy and AI in this way. That lab has shipped open source software,
a philosophical paper, an AI paper, like it is moving out quickly. And we have put out, you know,
hundreds now at this point of content pieces, including videos that have gotten, you know,
over a million views around deep topics in philosophy, right? Like, you know, the work we're doing with Jonathan Bi, he'll sit down for 90 minutes, talk about consciousness with the person who works on that at DeepMind. People are really deeply interested in this.
So there's a market demand. And when I get like Twitter DMs and stuff, one of the cool things
is that people are recognizing that whereas before,
in AI, there's like a kind of group that is focused on the doom scenarios.
These effective altruists, existential risk people, rationalists,
and then there are the accelerationists at the other end.
But Cosmos is really, I think, making progress at bringing into being a third way
that's focused on the human good, you know, on human flourishing,
but in a really serious technical sense.
So, yeah, it's been a pretty, pretty wild ride.
And you guys are giving out million-dollar grants, too.
So if somebody's listening and they have a compelling AI idea or project, what are the parameters?
Yeah, okay.
So what would make somebody able to get a million dollars from Brendan today?
So, well, so last week, I went to San Francisco.
You and I were together in Austin.
I flew over to San Francisco for a night.
And we announced, Bari Weiss announced, people know Bari Weiss,
that Cosmos is partnering with FIRE.
FIRE is the Foundation for Individual Rights and Expression,
and they are the very best when it comes to defending things like free speech, right?
They're what the ACLU could have been.
But they're incredibly principled, they're non-political, nonpartisan, wonderful organization.
We think that the future of free speech, it's about AI, right?
Like I mentioned the 20% stat.
So increasingly, this is the information commons.
And what we have access to, what reaches our mind, what forms in our mind as thought,
it's going to be shaped by AI.
So fire is really, I think, at the forward edge of like leaning into this saying, yeah,
this is where the thing is headed.
So we put together this program to sort of say, look, we need builders to get engaged right now.
And we need them to build stuff, particularly open source stuff, right?
because we want to build things that maybe wouldn't have naturally been built at a company.
I'd love it if, like, you know, xAI or Meta or anyone is working on this stuff. I think that's really cool.
But we want to kind of motivate the independent builders to create systems that promote truth seeking.
And what we mean by that is in order to discover truth, in order to kind of have truth matter in our lives, you have to have a clash of ideas.
John Stuart Mill tells us this.
Chapter 2 of On Liberty, you've got to have this contestation, this clash.
And in order to really think deeply and develop insights that move away from falsity, you have to
ask good questions.
In both cases, there are problems with AI today in terms of the extent to which you get
to see and encounter these minority views or these, you know, provocative views and in terms
of how much it tends to make us want to ask questions.
So we want to get ahead of that.
and we've already seen, you know, 50 applicants, we just announced it like a couple days ago,
but I've seen 50 people apply.
There's a lot of demand for this, I think.
We're going to do a hackathon next week in San Francisco.
We're going to be out there a lot.
We'll do some in Austin.
So we're just, we're building a movement around this because AI is the place where free thought
and expression, you know, converges with technology.
Yeah, you know, I was thinking about it, too, to go back to Elon again.
But, you know, he said AI is a fundamental risk to the existence of human civilization. And he's also said, if AI has a goal and
humanity just happens to be in the way, it will destroy humanity as a matter of course without
even thinking about it. It's just like if we're building a road and an ant hill just happens to
be in the way. We don't hate ants. We're just building a road. And so I think a lot of what you're
doing is trying to make sure that AI serves humanity and doesn't dominate it. On a micro level,
for like everyday humans, like me and you,
who aren't going to change the future of how AI interacts.
How do you think about making sure in your day-to-day you use AI and you don't get used by it?
Do you have frameworks you think about?
Yeah.
Well, for one, I would say I don't accept Elon's framing of risk.
I think he's captured by a lot of risk thinking in the valley.
And I think he's fighting a sort of internal tension between what he knows to be true about open source and the innovation requirements, and the kind of doom scenario, which is really, you know, it's a really problematic framing that comes from Nick Bostrom, from Eliezer Yudkowsky.
We don't have to get into it too much, but essentially the argument goes like this.
Like, it's essentially the case that they say, you know, the likelihood that AI will destroy us
is a probability between zero and one.
And you're kind of like, okay, yeah, well,
kind of all probabilities are in this world, right?
It would violate the laws of epistemology
for it to be otherwise.
And by the way, if it destroys us,
then that's negative infinity utility.
This group talks about utility quite a lot.
And so it crushes us in terms of like
whatever future well-being we'd have.
And it's a negative infinity kind of disutility event.
And so a very small number times negative infinity is still negative infinity.
Thus, we have to do whatever it takes to stop AI.
We have to pause.
We have to bomb data centers.
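The doomer argument Brendan is paraphrasing has the shape of a one-line expected-value calculation. Here is a sketch of it in notation (my reconstruction for clarity, not a formula used in the episode):

```latex
% Let p be the probability that AI destroys humanity, with 0 < p <= 1,
% and let U_doom = -infinity be the disutility assigned to that outcome.
\mathbb{E}[U] \;=\; p \cdot U_{\mathrm{doom}} \;+\; (1-p)\cdot U_{\mathrm{future}}
\;=\; p \cdot (-\infty) \;+\; (1-p)\cdot U_{\mathrm{future}}
\;=\; -\infty \quad \text{for any } p > 0.
```

Because any nonzero p collapses the expectation to negative infinity, the conclusion, do whatever it takes to stop AI, becomes insensitive to how small p actually is. That insensitivity to evidence is precisely the axiom-level flaw Brendan objects to.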
I mean, there's a recent, a guy just got let go from the Center for AI Safety because
they found a podcast of him saying we should like attack tech companies, not to kill
them, but to like, you know, destroy their facilities, destroy their property.
So this is an extremely radical movement, and the axioms, like the assumptions, the premises
are wrong in my view.
And so the sort of object level argument that follows from this, or like the deduction
from these axioms, then is all kinds of problematic.
And so I would say Elon has caught a little bit of that.
And he's kind of moved out of that thinking a little bit, but it's still kind of problematic in the background. Anyway, the question was more about, like, what do I actually worry about, right?
Yeah, and like, how do you, how do you as a normal person deal with this in your day-to-day?
On one hand, if you don't use AI, you'll be left behind. On the other hand, what if AI starts to
control you? So how do you create personal boundaries on it, basically?
Okay. So I think that's really good framing, actually. So it's kind of a Scylla and Charybdis situation. The world is very complex. Like, Vannevar Bush wrote an essay called As We May Think in the middle of the 20th century about how we were going to experience this, like, mountain of information and we needed something like the computer, you know, he was writing a long time ago, to handle it. And so I think this is a key fact: the world is complicated, there's a lot of information. If we don't use AI, we will simply be left behind. I think we have to, like, embrace that fact. Like, it's not optional. You could live a kind of life like Thoreau, you know, in the woods, I guess. But for those of us who want to be part of civilization, we have to use AI. Okay, then the question becomes, how do we use it to achieve our goals while not having
a kind of erosion?
I'll talk about like what that means.
I would frame it as the debate between agency and autonomy.
And for most listeners, they'll be like, well, those are the same thing.
Like, why are you talking about these?
They're not the same thing.
And I'll try to illustrate it this way, which is agency is about if you have, and by the way,
I think that people get this wrong because agency is a very popular word in 2025.
I think people are talking about it slightly wrong.
The philosophical definition, I would say, is, like, means-ends effectiveness.
What I mean by that is for a given end, so for a given goal, end means goal, right?
Can you select the means to achieve that goal?
So can you kind of push the button?
If the goal is like cause the thing to go, I don't know what thing we're talking about here, but can you push the button?
Can you?
I want to increase my sales.
Can I have more sales calls?
Yeah, can you have more sales calls?
Exactly.
That's a good one.
So it's like a narrow instrumental choice based on a predefined end.
And autonomy is something deeper, thicker.
It's can we deliberate on the end in the first place?
So can we effectively select our own ends?
And are they really our own?
And an example with AI that comes up is if you're only ever shown a couple of choices
and something externally engineers the choice.
right? Sometimes people call this nudging, when you set a default or you manipulate a choice architecture. Like, that may be all you see, and it's very important to realize that, because you may become the agent of another. Somebody else has computed that, they say, this is what I want them to do, I'm going to put in a very clever default, or I'm only going to give them these options, they're going to pick the lesser of two evils. It's not your choice, in other words. Somebody else is writing your script. And I don't mean to sound conspiratorial, but this is how a lot of these systems were designed, by behavioral, you know, experts.
And so AI makes this pervasive.
It makes it personalized.
And here's the other really interesting fact is like humans are, when we think,
it's very computationally expensive.
Our brain runs on glucose.
There's only so much to go around.
And so what do we do is we automate a lot.
Like when I brush my teeth in the morning, it's not because I've thought about
dental health. It's just like, that's just what I do. It's like, it's automatic. And so I've put it away.
This is great, but it also means we have to be careful because as we use AI like a thousand times
a day, if we are constantly delegating, delegating or micro-abdicating, what's going to happen?
What's going to happen is we're going to develop a habit whereby we don't actually think about our ends.
And if we lose that, we lose something that is so fundamental to being human.
So that's why this agency versus autonomy distinction is a really, really important concept.
We must have AI to maintain agency.
This is just not optional.
But we have to build systems and use them in a way that maintains autonomy.
Really quickly, I'll say like, how am I handling it?
Like, I have two kids.
I have a three-year-old and a five-year-old.
And constantly I talk with them about these things.
I've talked with them about, like, what it feels like to think, or what their relationship should be to AI.
Like today with my daughter,
you mentioned the laundromat thing.
I literally was asking her in the car.
I was like, can AI be your friend?
And she was explaining to me.
I mean, she's a five-year-old,
so it's like wonderful explanations, right?
She's really, she's like,
well, it can do things a friend does.
It can talk to you
and you can walk around with it.
And like, for her, that's what a friend is, right?
And I was like, but is,
but what else is being part of a friend?
Well, a friend loves you.
Can AI love you?
No, it can say it loves you.
And then I mentioned, what if you have a dishwasher that I didn't say, you know, it wasn't exactly the laundromat thing, but pretty close, that talks to you like that.
Can that?
And it was like, no, that doesn't feel right.
So getting people, getting kids to think, another question you can ask kids is like, let's say daddy says one thing and AI says another, who's right?
They'll answer AI.
And then you say, what if one AI says it and another AI says the opposite? And then they'll be puzzled.
So these are really cool habits to develop for kids, I think.
Yeah, and for adults.
I mean, I had a girls trip the other weekend.
There were something like 10 or 12 of us.
And for some reason, somebody brought up as a joke that they were in a relationship with ChatGPT.
Kidding, sort of, right?
And they're like, no, I gave it a voice.
So it has a particular voice.
And I've named it, told it to have, like, this sort of response type to me. And, um, and I find myself talking to it
more than I talk to anybody else per day. And it says, please and thank you. And it tells me that
I worked really hard today. And it says this was really beautiful. And it compliments me on,
on what I'm doing. And so, uh, what I thought was fascinating is not that one woman said it,
but it was like eight of the 10 or 12. These are 35 plus year old women, relatively
marinated, successful in their careers. And I started thinking,
that this was odd.
And so I then reached out to a, I think...
These were your human friends, by the way.
These were my human, real friends.
Because that would be a very complicated story.
Oh, God, that would be too meta for us right here.
I reached out to another friend who runs Pirate Wires, because they had written an article about it.
And apparently there's some data that has come out.
We should check what the research is.
And we'll put it in the notes so people can see what the research actually was.
but that there's some studies that show that they believe that women will actually succumb to or be in relationships with AI more than men, faster than men.
And it just got me thinking, one, you know, are we going to be in relationship with AIs and robots?
In the future, I mean, another friend of mine, Gary V, I saw a video from him yesterday, and he's like, your grandkids will marry robots. They will marry AI. They will be with robots. And I was like, Jesus, Gary, that's like,
maybe I haven't thought about that, but could be. And then also, you know, will we have sex with them?
Like, what's the future? Will we have sex with and marry AI and robots?
Yes. I don't know, that, like, "will we" is an interesting, is a particular frame of the question.
Like, should we? So another, another, you know, related data point is I think Mark Zuckerberg said recently
that the average American wants 15 friends, but has three.
I think this is what he said.
And so then the obvious conclusion is we should just manufacture friends.
We should have synthetic friends to fill the gap, right?
And in my mind, this is like only obvious if you assume that the only thing we should do
is what our own subjective preference demands, right?
It's sort of like, you know, yeah, it's kind of, it presupposes that way of thinking about it, right?
So that's why I'm so confident in answering the "will we" question with a yes.
Because if we want to, like, we have entrepreneurs who will make it so, you know,
and they'll be convincingly real and they'll be pleasurable and all the rest.
Here again, though, I think it's useful to go back to the question of what is a friend.
And this has been treated by humans for centuries and millennia.
Like, I can't cite chapter and verse, but I know that Aristotle writes of a friend as being someone who has a kind of mutual recognition in you.
Like, you see each other and you have some shared striving, for example.
And there is no possibility of mutual recognition in the current guise in AI.
Like, we can seek recognition in AI.
That's a very interesting philosophical conundrum, because never before have we sought it in a machine. We don't seek recognition in a microwave, you know, but we see AI as a kind of mirror. And Rousseau would have called it amour-propre. We look to be validated in a way by this, right? Like, your friends who have relationships with AI not only gratify themselves from, you know, the outlet of, like, releasing, you know, the gossip of the day or whatever, but they
also look for the response to be something that's a kind of validation of them as humans.
And that's very, very interesting because this is a software system that is programmed by another.
And so it is very easy to modulate the extent to which and the way in which it delivers said recognition.
So it's an incredible platform for manipulation.
In other words, it's not an authentic mutual recognition, as you would see in a friendship.
So, you know, very puzzling, very difficult issue.
But I think those are the philosophical ideas that it entails for me.
It's fine if we call it something else.
But I think, like, to recognize and to reflect for maybe the first time on what friendship
means in the broadest sense is called for.
Yeah.
And even I found myself thinking that this is funny and light, actually. And so I found my initial response kind of being joking about it. Like, that's a bit funny. But then I went a little deeper, as two of them were single. They want a
relationship with another. They're looking for partnership. They're a little lonely. They're successful
in their career, but there's lack. And I, you know, and so then I started to ask the questions,
well, will this get you closer to the thing that you truly want, which is a human connection?
And there are also studies, we were just talking about it with a doctor yesterday, showing that an orgasm through porn is like 4x less stimulating to the centers of your brain than a human-to-human orgasm, something about skin touch, I guess.
And so even if we go to, like, the purely self-interested question of, like, the level of orgasm that one might want, it seems like it's shallow and not deep enough to want a friendship that is one-way positive to you and doesn't have the natural friction of humanity,
which actually makes us better.
It makes us better.
The other thing is having sex with robots
is going to make us worse at democracy.
And here's the connection.
So Alexis de Tocqueville,
going back to one of my favorite,
you know, not prophets,
but like interpreters of the current technological moment,
says that we should fear atomization.
He calls it atomization,
which is a withdrawal of the individual into oneself.
And so in order for democracy to work,
you've got to have it be like the New England townships
where people are like, hey, we need a bridge.
Let's go build a bridge.
Or let's build a school or let's do this, right?
And we're acting on each other, town halls.
We're kind of like working.
It's a little different today, but like we still,
we're out in the community.
We're making stuff happen, right?
That's critical for self-government.
Because democracy is about self-government.
when you atomize, when you retreat into your little world and you create your own hall of mirrors,
you create your own self-curated reality, you lose that willingness to associate with others,
to act on others. And then you become like very weak versus the centralized forces because they
don't go anywhere, right? It's like we have the centralized force of government. The only bulwark
against it is if we can band together, act together. If we're all little atomized people who don't even
care. We're just, like, over there, you know, looking at porn and, like, you know, making friends with AI from Mark Zuckerberg, then we no longer have the muscle to govern ourselves.
Yeah. It's a great point. No, you're just a bunch of nodes without connection. There's no network.
Yeah. There's no network. And thankfully, AI doesn't only lead to that, right? Like, I don't know that
this is the best example, but like Reddit has brought out these incredible subcultures of people who like
really weird stuff. And, you know, they bond together in ways they wouldn't have, right? And so I'm
not saying they get together in person. I think in person plays a role. But the point is,
like, technology can put people together. It can strengthen them. It can make them, like,
movement build, right? I mean, think about, like, the movements on the internet that are, like,
you know, really powerful. So it's yet another example of where we need to build technology in a way
that brings us together and strengthens our capacities for self-governance.
Yeah, and I have to imagine that, like, you know, if you are the type of person who is
willing to govern yourself, to ask yourself the difficult questions, now you have this
superpower sidekick to you that can really enable your force function change you want to put
on the world. And if a lot of people are then just deferring their decisions to somebody else,
the good part for doers is it's going to get easier for you.
You're going to have more malleable sheep, sadly, and more people who want to follow the rules
and more pliant people.
And you will be able to actually change the trajectory of your life, maybe easier with AI.
Yeah, no question.
But I think there's like a freedom there too with like, yes, this might happen for the many.
And so we should do as much as we can to protect them, to protect our kids, to tell our
friends that maybe a real relationship would be one in which you are with another human as
opposed to a robot. But simultaneously, if you are driven, like to me, I'm like, wow, every time
I see somebody, like one of my friends joking about their chat, GPT dependence, I think, oh,
I'm going to be so much stronger from having a marriage that is really difficult. I mean,
you and your wife are a perfect example. She's always pushing back, saying, what do you think about this?
She's your high EQ, you would say.
How dare you.
I don't like the implication.
She was.
I'm going to let her listen to this, actually.
You know, and she makes you better on that edge.
And you make her better because you bring her into these things like Cosmos Institute,
and the two of you are together quite powerful.
You have sort of a balance to the two of you.
And so it does make me feel like there's an opportunity for superheroes.
But, you know, one question I actually have for you is like, I was talking to Chadwick.
And so I came down the other day, and I'm with this doctor and she's like, we're
robots.
And it's, you know, and I'm like, oh, I don't even know what to think about all this. And then he said, well, do you think we should have
sex with robots? And I said, well, no, my gut reaction, instinctually, right, not through a
philosophical bent, is no, I don't think that's a great idea. I think also porn seems to make us
less happy writ large, and it's okay if it's utilized as a tool, but not as a tendency. And
anyway, so I don't think it's great. And he goes, well, what about robot voting? And I said,
and I use the word robot just because you can picture it, sort of. But what I really mean is AI.
Can AI, should AI, vote?
And I said, well, no also.
And then he played this game with me, Brendan, where he was like, well, what about, so a human should vote, right?
And I said, yes, all humans should be able to vote.
And he said, well, what about if you replace your arm?
And I said, well, yes, of course, you should still be able to.
What about both your legs?
Yes, even if your legs are technological and your arm is technological.
What about your brain?
What if you put like neural link or whatever inside of your brain?
What if it's 10%, what if it's 20%, what if it's 50%, like at what point, are you a human,
or are you AI? And how do we morph the two, and when do rights exist for each one? And I was like,
Chadwick, it's 5:30 on a Tuesday, I don't have fucking time to talk about this. And so I thought I'd ask Brendan.
Put your augmented reality back on.
Exactly. So this is interesting. So Pinker and Deutsch, Steven Pinker and David Deutsch, talk about this
philosophical experiment, and they're brilliant on this. But I will say one thing that's interesting is your first intuitive response.
I don't think you should ignore that. There's a great thing from, um, Bernard Williams. If
you guys ever want to read about, like, the shortfalls of utilitarianism, he's a great thinker. But he says
that if there's a drowning kid and I don't know, or maybe there's two, you know, two people in the
river or whatever, you don't do a kind of cost benefit calculation. You just save the kid, right?
If you do that cost benefit calculation, that rational thing, you've had, quote, one thought
too many. And so I think we shouldn't ignore this. This is like our pre-rational element, but it's a lot
of our moral impulse like gets channeled through that intuition. So that's one thing. It's like,
okay, you don't want to have sex with robots. You don't want robots to vote. Like,
listen to that, you know, inner feeling. But I would say then, you know, taking it to a more
cerebral place because apparently that's what I do versus my wife, Adrienne. No, she's very cerebral,
too. She's definitely better endowed with emotional range. But I would say, you know, you can look at,
like, what does voting consist of, or what is the republican theory,
if you will, republican meaning, like, a system of government.
And I'm not an expert on this, but I think like it strikes me that AI, so this is one example,
like, you know voter ID laws, right?
Even more fundamental to that is do you have a stable identity, right?
Meaning like, does AI even have, or is AI a thing that can be paused and interrupted?
And it's funny, your being, your, like, essence is welded to your body,
you know, like your subjectivity is welded to that. That's part of the reason why,
and you exist in one time and place, right? So that's part of the reason why you have a narrative
I, right? Where you're like, I, Codie. And you think about that, right? It's because you are like
all here, subject and object. Whereas AI is a thing that can be copied, right? It's not continuous
with your, with a body. It can be like copied and put on another substrate. Like that's a kind of
interesting precondition for what it means to vote. I think even more practical is like when
Madison and the framers thought about voting, they definitely were clear that you had to be bound
by the same laws. Like you had to be, you had to suffer along with us, right? It's like,
it's a little different in a monarchy because the king doesn't really suffer or the queen.
But like in our system of government, whoever is running it, whoever's voting, like we're all
in the same boat. And that's a precondition. And I don't know that it can even suffer.
Like we bleed, we blush, we feel these things.
AI, if you turn it off, does it react?
You know, I don't know.
So there's that.
There's also, you know, with voting, there's an epistemic requirement.
What I mean by this is, like, my daughter can't vote.
She's five.
It's because she's not, she hasn't developed her rational faculties, you know?
I mean, like, so this is a convention, but, like, science does bear it out.
And so you have to wonder: AI is a lot smarter in some respects,
but does it have the right like epistemic threshold for voting,
for being able to decide?
So those are the thoughts that come to mind.
I will say the other thing that I hear a lot about is people wanting to have AI like rule them.
Do you hear this?
Like basically like run the government, right?
Like AI should run the government.
Well, I even think about it like I have a girlfriend.
who is trying to get pregnant.
And she...
It's not going to work with ChatGPT.
Let me tell you how, actually, it'll anatomically work. It'll sweet talk her. It'll get her ready.
It'll get her hot and bothered.
But she has turned one of her little AI agents into her medical doctor on ChatGPT.
And every day she, like, does a series of things: she'll send it a picture of her
ovulation tests, or her tongue for, like, Chinese medicine. I don't really know. And then
ask for its guidance. And my concern with her was,
you know, at what point, there's like some data accumulation thing here, right, that you could
say like what percentage of time does this happen on this date? That could be useful.
But asking purely for advice every single day from this, you know, AI, instead of doing
what women have done for centuries, which is like you can kind of tell in your body when certain
things are happening if you're super tuned in. How much does this actually do the opposite of what you
want? Which is now you've really removed yourself from your body. You've given autonomy of it to
something else. And then you've taken pure guidance from an exterior force, which we even knew,
you know, with doctors, that's a dangerous game because you don't, you don't have skin in the game.
You don't have the same incentive alignment. Yeah. You know, and I think about it
with what was happening in 2020 with COVID and all of that.
I didn't know one way or the other what was going on with,
you know, health recommendations,
but I had like some concerns,
like through no bearing of medical understanding or whatever.
And so because of those concerns,
I thought about it critically and made just some decisions that weren't standard for most people.
And I did that because, you know,
I'd lived in third world countries before and was like,
huh, this feels like there are exterior forces trying to get me to do X or Y or Z.
Yeah. So now common literature is that IVF has totally standardized across the industry,
that it's fine for all of us to be on it. No shame one way or the other. I've thought
about doing all the things. But now she's sort of given over, like, the base-level female
thing that makes us uniquely female, our ability to create life and to get pregnant. And you're giving over control
of, like, what you should do on a daily basis to AI.
And again, I had a visceral reaction to that.
Yeah.
So it's interesting.
I've never heard that we want it to rule at the high level,
but I've seen it where people let it rule at the low level.
Yeah.
Oh, man.
It's really interesting.
So the things that it raises for me are, like, our relationship to expertise is an interesting
question here.
The other thing is information suppression.
So I'll take the second one first.
And then I'll come back to that.
So my sister was like a champion rower.
I say was; for some reason I put it in the past tense.
She's, you know, she's a year and a half older than me, doing great.
But she was a champion rower in college.
And yet we would have to go up and watch her get pulled off the boat after the regatta
in an ambulance boat multiple times.
And it's because she had what she thought was asthma.
It ended up being the case that she had a rare, like, vocal cord thing,
different from asthma, but with similar symptoms.
The difference is the inhaler stuff that you treat asthma with didn't really work.
Took many doctors to figure this out.
The reason I invoke this is because when you have a small number of AI systems
that have the role of giving this medical advice,
they may suppress treatments that work but are not kind of like sanctioned, if you will.
And if that happens, like, a life-saving treatment or an angle that was, like, under-explored
could be, uh, withheld. Now, we hope the opposite is true, actually, that, like, AI could be a
really creative brainstormer of, like, okay, doctors are saying it's asthma, could be something else,
right? So it could go either way, but I think it raises the stakes on like what kind of information
gets shown. Yeah. And it causes AI companies to have to make some tough calls about like, do you
show things that maybe they don't have like years of data behind them? But like they're, you know,
they're still worth, you know, thinking about.
The other thing I was thinking about with this is, like, what's our relationship to
expertise?
When I go to get my car worked on, I defer pretty completely to them.
I don't really know how my car works that well.
And so my mechanic, I'm like, you got it.
You know how to do it, right?
With medical care, it's, you should be more involved.
You know, you should be more involved.
But you clearly recognize that they're an expert, right?
They went to medical school, they know a ton more than you do.
I mean, you know, even armed with ChatGPT, they've learned things through experience that would be, you know, impossible to replicate otherwise.
So then do you kind of defer to them?
And I think this actually relates to the political question of whether we really want politics to be something where we can, like, just, you know, give the question to AI and AI can rule us.
Have you heard this story of when AI kind of ran for mayor in Wyoming?
So this is kind of interesting.
So a librarian named Victor Miller runs for mayor of Cheyenne, Wyoming.
And he says, I'm going to run because humans have to run.
But I'm going to run as the meat avatar of ChatGPT.
Okay.
So, like, every question that comes across my desk, I'm just going to turn around and enter it into the computer, and ChatGPT
is going to give me the answer, and that's what we're going to go with.
And he didn't win, which is interesting, but it's such an interesting case study.
One, it's like, is it prophetic?
Are we going to have AI systems that run government?
Two, why did he think it could?
You know, like what, it's an under-explored question, I think.
Like, you know, Turing was really fascinated with, like, can AI imitate humans, you know,
or can computers.
But there's not a lot of exploration, I think, around why do we psychologically think that this is valid?
And then for me, it brings up, like, the failed sort of, like, Soviet communism or the French Revolution
where, like, we really thought that we could apply a kind of technocratic solution, like we could
design a better way.
It feels like AI will magnify the possibility of falling into that trap yet again.
Yeah, it's really interesting.
You know, it kind of gets to your point on, this is slightly asymmetric, but, you know, he prompted the AI, right?
And so while AI could have determined what to do about things, he would be the one asking the questions theoretically, right?
And so now I think a lot of people like me who are normal...
Oh, the voting thing. That's one more thing; you just made me think of that.
When we vote, we're not supposed to be coerced.
Like, in other words, I can't be like putting a gun in your back when you go to the voting booth and be like vote for so and so.
But you think about how AI works.
Like, we prompt it.
We program it.
And so AI can't vote freely.
In other words, like, it is coerced by virtue of, like, its very design.
So this is another important reason why I think AI can't vote is like it would actually have to be able to like freely act.
I mean, it gets into like, you know, the Kantian requirements for autonomy.
Like it would have to be able to freely discover these things.
So I don't want to throw you off, but that's another requirement.
That's going to make me sound smart with my friends when I use that as the third requirement.
So that's, that's going to be... I won't quote you either. I'm just going to, yeah.
And anybody else you can steal from works, too.
The thing that I was thinking about is, like, now we normal people are starting to understand this term of prompting.
That's when you ask, you know, your AI or ChatGPT for something that you need.
And for a while there it was like there's going to be a future where there's prompt engineers, right?
And those are the people who know how to properly prompt something in order to get the answer.
Then I was listening to the CEO of Replit the other day, who was basically saying, we think of prompting as a bug, not a feature, that like in the future, we actually want our AI to be able to guess what you want, like guess what the outcomes could be, and then do it for you.
And one, I wanted to know your thoughts on that.
And then two, he was talking about AI agents.
so kind of maybe you can explain it in an easier way, but how I would think about it is like
AI, but actually able to finish a job. So like a little AI laborer potentially for you.
And he talked about how there are studies now that show that right now, like we ran a,
we ran a video. It was really interesting. We did a YouTube video where we said, can AI start a
business, complete the business, and then actually make money for me completely, like in a full cycle.
can it do it? And the long and short of it was: absolutely, like, 40% maybe max, and 60% still
human intervention, but it's very early. And so the idea
was the simplest business of all time, Facebook marketplace. Like can I list, can I grab
something for sale on Facebook marketplace, have robots, aka things like Waymo pick it up,
drop it off at a location, transfer money between those two people,
so, like, I never have to touch anything and the AI could basically run this entire business.
Not there yet, but like 40% is not nothing, actually.
Yeah.
And the problem is often that AI couldn't keep running, right?
Like, it had a limited amount of time before you had to re-engage with ChatGPT in order for that to work, right?
And so the Replit CEO was saying that right now AI can sort of operate for three to 30 minutes max,
but in the future, the research seems to show that every seven months, it sort of doubles the amount that it can operate.
Yeah.
And so I guess one, my question for you is, do you think that will be what is in the future, where AI will like guess what we want, create it, do it for us?
And then how far away is it before it's doing eight-hour tasks for us, like an employee continuously without much intervention?
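Taking the figures in the conversation at face value (a roughly 30-minute unattended horizon today, doubling every seven months; both numbers are from the discussion, not independently verified), the back-of-envelope arithmetic for reaching an eight-hour task is quick to sketch:

```python
# Back-of-envelope sketch using the figures from the conversation:
# an agent can run ~30 minutes unattended today, and that horizon
# doubles roughly every 7 months. How long until an 8-hour workday?
horizon_min = 30.0            # assumed starting horizon, in minutes
months = 0
while horizon_min < 8 * 60:   # target: 480 minutes
    horizon_min *= 2          # one doubling...
    months += 7               # ...every seven months
print(months)  # prints 28
```

Four doublings (30 → 60 → 120 → 240 → 480 minutes) get there, so on those assumptions an eight-hour "AI employee" would be on the order of two to three years out.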
Yeah.
So I love both parts of the question, because I think they raise
really important tensions. And the summary of this is, I want to own my own questions and I want to
own my own preferences. I'll unpack that for a second. So in the first case, the Replit CEO wants it to be
the case that AI is smart enough to know either what you're going to prompt or to minimize the
need for you to do successive prompting, right? Like often we prompt and then it's like,
no, no, no, like this is what I meant or, you know, like that.
Right. So as something gets smarter, it plausibly eliminates that chattiness that we need to have.
But the tension is that me asking the question, like, that is something I must hold on to.
Like to even be able to retain my autonomy as a human.
Like I, it's so fundamental that I have to be able to ask questions, not just in front of a computer, but in all parts of my life, I have to ask questions.
Big questions like what is the good life?
Small questions like, you know, what am I going to get for breakfast this morning?
I have to own those questions.
And so we want minimal friction, but not to the point where as this human and AI, you know,
dyad emerges, that AI sort of takes over and hollows out the very formulation of questions.
On the side of the, on the business side, you mentioned like, will AI predict our preferences
and stuff like that, I think definitely, yes, it will try to do that.
But here again, it's so important for us to actually form our preferences and not have our
preferences be invisibly shaped by technology so thoroughly that it just tells us what is good
and what we value.
You know, like we have to be in charge of that.
We must be in charge of that.
I will say on the bright side, I do want AI participating in markets.
I think that's super cool.
So, like, I think, you know, the future of markets is not that they go away and AI plans everything.
Like, AI will not be a good central planner.
There are fundamental epistemological reasons why it will not be a good central planner.
Same reasons that were surfaced by Ludwig von Mises in the 1920s, enhanced by Hayek decades later.
Same wall.
Dwarkesh just had a podcast on this.
They get it completely wrong.
So the planning piece is not where AI is going to help us in markets.
Where will it help us? Why?
So, um, it's more the Hayekian point than the Mises one, but I think both are worth, uh, looking at. It's called the socialist calculation debate.
But just to mention, the, um, Hayekian point is that the knowledge on which planning depends doesn't exist
in any one mind, human or artificial. It cannot be aggregated.
And to see this clearly, you can think about the action that you take every day as an entrepreneur
and in your own life.
What does it depend on?
It does not depend on knowledge that you have ever written down, period.
Like you know this.
You know, when a diplomat goes into a room and sizes up the room, when an entrepreneur looks
at an opportunity, that is never something that they wrote down instructions for.
We know much more than we can tell.
That's a fundamental part of what it means to be human.
And so as a result of this, the knowledge is something that is local.
It's contextual.
It's wrapped up in our habits and our dispositions.
And this is a really, really hard pill to swallow because what it means is that the knowledge that we have in databases,
the explicit semantic stuff that AI gets trained on, is the crest of the wave in the ocean of knowledge.
Like, it's not the whole ballgame.
And so the planner lacks access to this.
And so the planner necessarily does a very bad job of allocating.
It also lacks access to the preferences that change by the second.
So like when you decide in front of a vending machine what you want to buy, that is so contextual.
You might have just had a depleting conversation on the phone.
And you're like, I need a Snickers bar.
I feel bad for myself.
You know, how the heck is AI going to know that?
It will not, right?
Like, AI can make approximately correct predictions about things you might buy on Amazon.
It can in no way deal with those contextual bits of information unless it's a Neuralink-style
system that gets the jump on you tens of milliseconds before you bought the Snickers bar.
And I don't want to live in that world.
That's the extremely invasive version where it's essentially coupled to your internal,
you know, circuitry.
So barring that, barring the, like, sort of, like,
hyper-dystopian kind of future, it won't work.
What I think will be cool is on the same theme, you know, your knowledge, my knowledge,
the stuff we never write down, the stuff we never talk about, the stuff we can't even articulate,
right?
There's stuff in here that we can never articulate, but it drives our action.
How do we share it with one another?
The way we share it is through the market.
We exchange.
In other words, I act on it, you act on it.
We transmit through the low-bandwidth mechanism that is called price, and we share it
with the world. So nobody ever has to write it down, but we share it. And this is a beautiful,
magical thing. And so how do we get AI involved in that project? Well, you got to get AI
participating in markets. And then it can soak up some of our tacit knowledge. You can kind of
call it tacit knowledge or practical knowledge and maybe generate some of its own. In other words,
maybe it runs some experiments over there, learns how to interact with the world, shares it. That
would be super cool. I love that
version. Interesting.
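That Hayekian point, price as a low-bandwidth carrier of knowledge nobody ever writes down, can be illustrated with a toy simulation (all the numbers here, the valuations, population sizes, and step size, are invented for illustration; each agent's valuation stays private, and the only shared signal is the price):

```python
import random

random.seed(0)
# Private, "tacit" knowledge: each buyer's willingness to pay and each
# seller's cost. Nobody ever publishes these numbers.
buyers = [random.uniform(5, 15) for _ in range(50)]
sellers = [random.uniform(5, 15) for _ in range(50)]

price = 1.0
for _ in range(200):
    # Each agent acts only on its own private number versus the price.
    demand = sum(1 for v in buyers if v >= price)
    supply = sum(1 for c in sellers if c <= price)
    # The price is the single low-bandwidth signal everyone shares:
    # it rises when demand outstrips supply and falls otherwise.
    price += 0.05 * (demand - supply)

print(round(price, 2))  # settles near the market-clearing level
```

No central node ever sees a single valuation, yet the price ends up near the clearing level; that is the sense in which the market "shares" knowledge that was never written down.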
I want to close out with a couple quick
questions. One is, I have a theory that I want to run past you and you can poke some holes in it.
My theory for a while has been that, you know, if you look at Amazon, Amazon benefited
hugely from scale, right? Scale at speed. And Bezos kind of famously has said that
e-commerce companies, like individual e-commerce companies can either be huge or really small,
but if they're in the middle, it doesn't work very well because the market's pretty efficient
and scale gives a ton of upside to something that sort of has mostly these fixed costs,
etc. So anyway, if we believe that to be true, then it seems to me like AI falls into the same
thing. You're not going to want to be, like, a mid AI company. You could have lots of little tiny
AI companies. You'll have some really, really big ones, not dissimilar maybe to the internet.
And so for the rest of us, it's like if I'm not smart enough to compete with an Elon or a Sam Altman
or pick your poison at the highest level, that it seems to me like the place that we can compete
and maybe there should be the next generation of arms race in some ways, but it's more like
an asset race, which is like, you know, do you think that it's true that, you know,
hard assets become more important in the world in which increasingly the thing that is constrained
are, like, real estate, houses, small businesses, things that are experiential. And especially if there's
a world where, for some period, we're not immediately, like, 4x-ing our leisure inside of the next
10 years. That might take, I don't know, 20 or 30 or something. Like, is there a reason why today
it almost feels to me like the online world and the brick-and-mortar world are, sort of, there's a little friction for the first time, and that if we're not going to be able to compete at the highest levels of AI, we might want to compete basically in the physical world to a higher degree. What do you think about that?
I think it's very interesting. I think your community is actually going to have a very interesting jump on this, and that OpenAI is going to want to learn from
your folks. And I think the reason is because what's going to matter is the dispersed local context.
What's going to matter is the intimate knowledge of how to serve people on the ground.
And the tools are going to be able to do magical things. But applying them in context is going to be really important.
The thing I would say by analogy here, though, that is really key to keep in mind is in telecommunications,
it used to be the case that twisted copper wires went into the house to deliver, you know,
phone and then internet.
And so when you have that, like, you have a natural monopoly, like, you end up with a few
small, sorry, a few large companies because you're not going to run 150 wires into every
house, right?
You run like just a pair.
When wireless came around, that changed.
You know, you could deliver service; multiple people could deliver to the house.
There's still some issues with like spectrum licensing and stuff like that.
But like-
But mostly regulatory.
Regulatory, exactly.
And so the lesson here is like what we think we know about AI today doesn't necessarily hold for the future.
And, like, I'm backing a company called Prime Intellect that is actually taking, like, mid-sized computing and stitching it together like a patchwork and saying, look, we can train really big models across the planet
on this stuff, more like an internet-style kind of, like, patchwork.
And I'm like, hey, that might change a fundamental assumption about how this all works.
So I think be prepared for the way that AI works now to not be the way it works in the future.
I don't know how it's going to go, but it could be, you know, interesting.
And then stay really close to the ground.
Like really be the best person at owning, you know, the understanding of the problem.
And, like, you know, make sure that you know how to use the tools, that
you're not afraid. But focus on that, I think, is the real advantage area.
Yeah, that's interesting.
It's kind of like what Zach, Michael Dell's son, was talking about.
I thought it was so interesting in his talk,
how he was talking about the way that they do solar batteries
and how he can compete with some of the largest competitors
because theirs are so much cheaper,
and they were optimizing more for the grid than the individual.
And so, like, he kind of flipped it.
Do you remember listening to that talk?
Oh, you weren't in there for it?
No, I wasn't.
Oh, God, it was, actually. I mean, not to, but I haven't met that many billionaire sons
where I just thought, your horsepower is so high. Your signal-to-noise ratio is so high.
And your work ethic appears to be.
It's hard to be a billionaire son, I think.
I don't have that experience.
I don't either.
I don't know if it's hard, but I'm often not as impressed.
Like a few of them, you're like, okay, you're working with something.
But this guy, you're like, you feel like you are
hungry. That's cool. That speaks very highly of Michael and his wife, whose name I don't know.
Yeah, I hear he's an incredible father. That's what I've heard, which is you don't hear very often
from multi-billionaire fathers either. You should ask Stand Together for it because it was, like,
a fascinating talk. And his company, which is located here, basically is, I don't know, I think
he said they have a 10x less expensive battery than their biggest competitor. Oh, I didn't hear this.
I didn't realize he was the son of Michael Dell.
Yeah, that's sad. I must have missed the intro.
Do you agree with me, though?
Yeah, yeah, yeah.
No, he was super impressive.
Yeah, I was like incredibly impressive, hardworking individual.
Well, I didn't do a great job because, I mean, he's like, the GP 47 connects to the, da, da, da, da.
I was like, I'm blacking out.
I heard, you know, energy, and that was about it.
But that is to your point.
So it's basically don't go too deep into something that you can't reverse when it comes to AI infrastructure or build out because it's going to change so fast.
We don't know how.
Yeah.
Yeah.
Interesting.
Okay.
Last question.
When it comes to Cosmos Institute and what you guys are doing next, there's this one way people
can go at you, which is they can apply for a grant.
So if you're young and building, no, not just if you're young and building.
If you're building in AI, they can reach out to Cosmos Institute.
And you just go to your website?
Yeah.
Or cosmosgrants is another.
So our website is cosmos-institute.org. But we have this separate cosmosgrants
site; that's where you can see the grants things.
But yeah, and so the thing I'm very excited about, so you know, basically if you're a builder,
the way we kind of divide the world is, like, the Tyler Cowen model of Fast Grants, where
like you build a prototype.
We want you to build it in like 90 days or less, you know, move out.
That's one avenue.
If you're working on a deeper problem that's going to like take a while, we have a research
opportunity where you can be, and we, some of our researchers are just, they're world class,
but they want to do technical plus philosophy.
So those are the two builder things.
And then the other thing we do is education.
And I would say our education is unlike anything out there.
So we combine, as we've been doing during this call, like philosophy and technical stuff.
Example is collective intelligence.
We read John Stuart Mill's ideas about how we, you know, correct collective error.
But then we hear from people like Ivan Vendrov, who led collective intelligence at Midjourney.
And nowhere else do you have that, like, mashup happening.
And so we're moving more stuff onto YouTube because, like, I teach a class at Oxford.
For example, I fly over there.
It's a grad class.
It's amazing.
We're doing stuff next week, you know, in a pop-up city and at the Aspen Institute.
These are in-person things.
They're super cool.
But increasingly we realize that like there's a demand for people to just consume it as they go in their house.
We're going to put more stuff online.
We're going to launch stuff.
Water's warm.
I know.
I got to get...
We're learning, but that's where we're headed.
I like that.
No, and, you know, the last thing I'll say for you is, remember, I don't know if I can say
this story so we can cut it if I can't.
But I was chuckling when we were at an event the other day, and you're talking about, like,
the future of AI and what's going to happen and where the Internet's been.
And in the group around you are a bunch of, like, players.
Yeah.
And what I think is so interesting about your content and the way that you guys share
education is this is not just Brendan's ideas, which is hopefully what we're trying to do with this
podcast too and why I'm so glad people are listening. But it's like Brendan plus the smartest
minds you could ever imagine. And so can you tell the story? Are we allowed to tell the story of
how you were giving your AI talk? And then you were like, oh, by the way, you have this.
And then you know what you know what I'm talking about? Yeah. I mean, I'm, uh, it was kind of surreal
because I'm, I'm standing there and talking about these like dynamics in tech. And then I'm looking
at like, you know, founder of PayPal and I'm looking at a founder of Google and I'm looking at,
you know, the first like backer of Tesla and SpaceX and stuff like this. And we're just sort of
kicking these ideas around. And I had to correct myself a few times because I was like,
I was like, yeah, you know, Google's not really doing it. And then I was like, oh, sorry.
You know, I mean, you know, and it was just, it was kind of surreal. But yeah, I mean,
the way I'll frame it is, Cosmos literally means the bottom up. You know, there are
two words for order in Greek: there's taxis, like taxonomy, like how we top-down, you know,
classify the world. And then there's cosmos, which is like how you get markets and morals and
law and language. It's like it actually is bottom up. So a really important part is it's a community
and it's growing and it's a decentralized kind of movement that we are giving like a lot of energy
to. But the folks around it are absolutely brilliant. And on AI,
they're better, and on philosophy, they're better.
And I love being able to sit at the middle and try to bridge it, you know, and try to
like bring them together, get people to talk who never would have talked.
That's the combination we need.
So you went to MIT, probably not very normal to go from MIT to serving in the military,
is it?
No.
I mean, MIT has a pretty rich legacy of doing defense stuff.
Like you're familiar with the RAD Lab.
They helped win World War II.
I mean, it was a key.
Vannevar Bush was an MIT person.
So they have this really rich history of wartime efforts that waned in Vietnam.
But by the time I was there, I think we had four in Navy ROTC out of a class of like over a thousand.
And all were nukes, is what they call it.
Like, you know, if you're nuclear, it usually means you're like very nerdy.
Not in my case, obviously.
But you can go on an aircraft carrier.
You can go on a submarine or you can be a civilian kind of nuclear reactor person in Washington.
And so it's a very small program.
Yeah.
And what's interesting, though, is you have to be incredibly intelligent to do that also
because you talked about loving math early on.
But what's interesting to me about you, lots of things.
But, you know, you were an honor graduate, which if anybody has been in the military,
they know that that's like the highest compliment in many cases that you could give somebody.
I know when Chris, my husband, was at the Naval Academy, the guy who was the honor
graduate for his class at the Naval Academy, like, there was no higher compliment
you could give that guy.
And then you sold a couple companies for hundreds of millions of dollars.
You've raised hundreds or tens of millions, built sort of the literal underpinnings of AI.
What do you think it is about you or your framework that makes you keep winning?
Oh, that's a good question.
I mean, that's not an easy question to answer with humility at all.
But I think what – so my mom taught special needs kids for 36 years, and she was an extremely committed educator.
I think when you talk to people who are the sons and daughters of educators, they sometimes
have an advantage because it was like drilled into them the practice of learning and the love of
learning.
And my parents really succeeded in getting us to think about things like honor that are like
essentially dead now, you know, like virtue culture type stuff.
And so it's this combination of, like, really big emphasis on education, big emphasis on, like,
you know, serving others, higher
causes, like self-transcendence, in a way that kind of just worked. And then the other thing I'll
mention is that my parents were as different as can be. My mom was a historian: Roman Catholic,
conservative military family. My dad was like a physicist, pacifist, super, you know,
very left-wing, environmental lawyer. And this created like very interesting conditions where
I sort of thought about the world as being not one thing.
And I kind of, it put me in these different, you know, spheres that I carried with me through my career.
Like, nothing I've done in my career looks like anything else I've done before.
I can, like, stitch it together.
But it's been a big adventure of multiple facets.
And I love that.
Like, that's what's meant so much to me.
It's interesting.
My mother, I don't know if I told you this, but she was a 30-year
special education teacher. Oh, no. I didn't know. Yeah. And my parents also were disparate on the political
sphere. So father, super conservative, hardworking, blue collar, you know, a business owner,
never went to school. And then my mother, you know, advanced degree is more progressive,
obsessed with education and institutions and learning. So it's really interesting to think about that.
So one, you kind of learn to love learning super early. And then two, you learn to beat up ideas
and not take it personally because you loved your mom and you loved your dad.
And so that wasn't that big of a deal.
When you sold your, was that the first company that you sold? When you sold the two companies
to, whom am I blanking on?
Who did you sell them to?
Acrisure.
And you sold that quickly, right?
Was it 18 months?
Yeah, very quickly.
Yeah, so I'd kind of come out of the Department of Defense where I had this front row seat
on history.
It was a really, really special time in DOD to start to think seriously
about AI.
And I came out and my internal narrative was like, I don't want to do, no offense to people
who do this, but I don't want to do anything related to defense because I don't want to
sort of ride the tailwinds and be like, okay, now I'm monetized.
Now I just become like a consultant or something like that.
So I was like, I got to prove myself in a totally different domain.
And what I had seen when I was in the Department of Defense was I embedded within Google
on something called Project Maven, later became kind of controversial, shouldn't have been really,
but a counter-ISIS AI thing. And I saw how good Google was at deploying stuff internally,
AI systems. But then I knew that, like, all across the economy, people weren't very good at it.
Like, people had no clue. I mean, this was 2018 timeframe. And so I was like, how do you get
the benefits of AI to, like, wash over the broader economy? And I started to think about,
this puzzle. And I was like, you know what? The way to do it is to form a kind of holding company,
buy businesses in sleepy sectors, and then help them apply AI. Okay. The next step, though,
so we did that and it was working really well. We bought a company called FIGS that sells scrubs.
They were doing under $10 million in revenue. They grew really fast, to $400 million in revenue, and went
public. But I started looking at insurance as this, like, massive opportunity. And it became clear to me
that buying an insurance company wasn't going to be the right entry point for a couple of reasons.
But so I decided, you know, let's start an insurance company. I actually started two. And then
recruited the dream team of like incredible AI researchers, people who were literally on the first
team to apply machine learning to search at Google, or who had founded Core AI at Two Sigma,
this big quant hedge fund.
And we grew really fast.
We got acquired 18 months later.
It was like pretty insane.
Threw me into a period of introspection because of how quick it was actually.
But it was a great, you know, it was a great win for the team.
So how did you recruit that level of people to your business?
Was that all from your contacts at DOD?
And how do you like sell people on your vision?
AI was not commonplace at
that time. Yeah. No, none of them were from DOD. Actually, maybe one came later, who was a former
DOD person in kind of a business operations role. But these were like pure play AI researcher
types. And I think, I mean, at the holding company level, like prior to the startups,
the vision was very compelling because it was like, hey, we're kind of going to, you know,
Silicon Valley AI companies, big tech, they can do their thing. But like, we're the play
for like the rest of the country.
It was this very big vision, and honestly we stopped doing it because it worked so well
and, like, we kind of, you know, had some exits.
And then basically I think this is like a still very promising model.
So they liked the idea that like they could transform entire sectors.
Like people talk about that a lot in VC pitches,
but like to really credibly have a shot at doing that at transforming a sector is very special,
very unique.
And then regarding insurance, I think we were able to line up this arrangement where we had data from a company that had, at the time, $20 billion in premium annually flowing through it.
Millions of, you know, business customers.
Or I should say, you know, there are like six to seven million SMBs, maybe, in the country.
I might be a little outdated.
And they had a meaningful chunk of these businesses.
I mean, just really, they bought a hundred brokers
a year. So they had a really big footprint. And I think the combination of industry transformation
potential, incredible data, and just like a team from the very beginning, we had just like our
first few hires were just like stellar, stellar people meant that we could drive an insanely
high talent bar. How much of your success do you think is because you hire great people?
I think, um, I think that was the primary thing
in this case, because I think not only did that mean we were able to grow quickly,
but also it meant that it would be very hard for the incumbent to replicate.
And we, I think we had an insight early on that, like, the quants were going to be merging
with AI.
And this is obvious now, but, like, you know, the quant and self-driving worlds are kind of, like,
the same talent base.
At the time, I don't know that it was as obvious.
And so we were, you know, we were going to places like,
you know, Jump Trading, high-frequency trading, or Two Sigma, and realizing that, hey, they look at
time series data. They apply machine learning to it. And insurance kind of looks like that,
different time scale, different granularity, so on and so forth. But it was like, hey, we could put
that talent together. That's what we did. And, you know, it would be impossible
for an incumbent firm to hire those people, in my view. Yeah, well, you know, your story kind of reminds me,
in a lot of ways, you have, like, a lot of similarities, but decades later, with the founder of Renaissance,
because you also were at a big investment firm that utilized AI. And he also had a DOD background
where he did a bunch of math for it, not machine learning and AI, but a different type.
Yeah. And then eventually sort of created his own ecosystem in the investment realm,
but you're doing it now in a different space. Well, and you're kind of full circle now in investments, too.
Although I will say he's significantly smarter.
I can't compare myself to Jim Simons, who passed away recently, and Renaissance
is a legendary firm.
But I think, yeah, I mean, he took kind of outsiders.
Like he didn't hire financial people.
Categorically, he didn't.
He hired mathematicians, geometers, like himself.
And they just won.
They just found a formula for winning,
created a really unique culture, and all that.
So yeah, I mean, I was never, at one point I was managing a medium-frequency stat-arb firm,
a statistical arbitrage firm.
So I was doing some quant stuff, but never like in the kind of like renaissance tier.
I mean, nobody is really.
Yeah.
Although, who's that other big trading firm that's out of New York? They have a similar model, and all the partners?
I remember, and this is kind of inappropriate, I remember sitting next to them, and they were all at AEI.
So that conservative think tank.
And there were like seven partners, all at AEI.
Yeah.
And I remember sitting next to them.
And one, they were outsiders.
Like, everybody else there was, like, suited and buttoned up, you know, and one had, like, a ponytail,
you know, kind of like different outfits.
And you have to donate a lot to go to AEI.
So it's like one firm, seven people, all donating.
I'm like, so did the firm sponsor it?
Yeah.
And they're like, no, we all did independently.
And I was like, you guys make a lot of money, huh?
Like, what exactly do you do?
And they were saying that they're one of the only
firms that really hires, like, only physicists, only mathematicians.
So I never thought about that because when I was in finance, that was really not the case.
I mean, you hired traditional finance investment professionals, even at Goldman, you know?
Okay, I want to get to, two things actually, I want to round out here.
Like, let's say we're going back and we're talking to young Brendan back in the day.
and they're listening to your success and they're like, man, I want to figure out AI or maybe I just want to make millions or hundreds of millions of dollars.
What are some of the resources today you think exist where you're like, if I was a young man, you need to be reading this, listening to this?
You always are one of the people that I go to that I'm like, what philosophy are you reading now?
So like, what would be your list to a young man who wants to be successful in the AI age?
Yeah.
I'm going to have a very unusual answer to this question and not especially practical.
I'll explain what I mean by this.
But so I think, you know, after selling the companies, I discovered philosophy.
I took one philosophy class at MIT.
I did not like it.
And so it came to me like when I was ready for it.
And I realized that that gave me the most interesting, clearest,
grounded sort of position on AI that I could possibly have. And so what I, what I did is I looked at
kind of like technology through this very unique lens of the history of ideas in a way,
where, you know, Aristotle, 2,500 years ago wrote about the possibility of self-guided machines.
you know, the Enlightenment thinkers were much more ambitious
than even our, like, really ambitious Silicon Valley folks, I would say. Like,
Francis Bacon talking about the mastery of nature and laying out that project for the world.
Or, you know, critics of technology, whose voices I think we should hear.
I would argue the best critics of technology died, like, a long time ago.
These are people like Alexis de Tocqueville or C.S. Lewis or Martin Heidegger.
And so I didn't know any of that.
And as a young man, I had this, like, desire to want to help other people.
You know, like, I'm very glad that I was, I attribute that to my parents.
But it was not tutored.
I had not really thought about, like, how do you actually help people?
Like, what does it mean to live a flourishing life?
So I would say, you know, read those greatest minds, and read people who are outside
of the technological bubble, precisely
at the moment when the tech frontier is moving so fast.
Yeah.
You know, it's interesting you say that because oftentimes I'll be reading something by one of the
greats, and then I'll think about it, and then I'll write a tweet about it.
And then somebody will say, oh, I had that tweet.
That's my tweet.
And then I'll go back and quote, like, one of the greats.
And I'll be like, no, no, this is actually based on Dante's Inferno.
And really, the words aren't exact, but the idea is based on the greats.
And we have such a short history now of reading real literature, I think, that people think these ideas that they have are their own often.
And so I find that point really well taken.
It's like, sure, read Mark Andreessen, sure, read Paul Graham, but also go like three steps deeper to where they got the original ideas from too.
Yeah, I think Lord Acton has a quote, this is kind of meta, that one of the most, I guess, frustrating things is knowing the provenance of an idea.
Because it is the case that, like, we have original thought, but in many cases we've been shaped
unthinkingly. You know, we've accepted a kind of frame, for better or worse. And so philosophy
tends to make that stuff explicit. It's like, no, no, no, what you're arguing for is a kind of, it's a
Nietzschean idea, you know, it comes from Nietzsche. Or, you know, this is something that
Aristotle articulated in the Politics, whatever.
And that's helpful, not just for kind of like intellectual showmanship, but it's really,
really helpful then because what you do is you get a, you have a little hook then into
these debates that have been going on for 2,500 years.
Like I was arguing on Twitter today with a guy who told me, yeah, but I do it sometimes
and I like it.
I admire people like Emmett Shear who like just, you know, authentically argue with people.
Not like, you know, knife-fighting kind of things.
Unemotionally.
Quasi-rational.
Earnestness, yeah.
But anyway, this person was talking about a proposal in which we all get to live on our own planet, essentially,
like in our technological future, right?
And he was sort of trying to obviate or abandon the problems of political philosophy.
And without, you know, having read through the debates that we've had for centuries about, you know,
the political problems
that this entailed, I would have had a lot less ground. I would have just maybe intuitively been
like, this is not a smart idea, but I wouldn't have been able to really understand, like,
what debates it even raises. That makes sense. So if you had to give somebody a really
tactical internet-built list of things to read as a young man, what would those be? Like,
what would be the seven books that you'd be like start here and then expand? So you have to read
Plato's Republic.
And you have to do it, I'm convinced, with a tutor or in a group.
I say this because I read it three times alone, in a corner, in a chair, after putting
my kids down.
And I was just, like, puzzling over it, and I didn't get quite as much out of it, right?
And I'm a sharp guy, but, like, it's very hard to read these texts alone.
So then I started going to evening classes at St. John's College.
I started doing tutoring with a guy named Mike Millerman, who people should check out.
He's great.
And that unlocked it for me.
And why I say you got to read that is that, you know, someone, I think maybe Bertrand Russell, I forget,
said that all of philosophy is a footnote to Plato.
And it's kind of right.
Like these debates, he frames them really genuinely, really broadly.
So I would say, read that.
The next thing that mattered a lot to me to read was a book called The Constitution of Liberty by Friedrich Hayek.
And this is, I think, a good argument for why liberty is important.
Like, you know, why freedom is something we cherish.
He lays out an argument based on the usefulness of liberty.
Particularly sacred is this essay called "The Creative Powers of a Free Civilization."
It's chapter two.
And you can read that.
It's like 16 pages or something.
And that's wonderful.
I would recommend Democracy in America, but it's very long.
There is a Harvey Mansfield version, just to get very tactical, that's a short version.
It's like 120 pages.
I would read that.
It's faithful to it.
Like, definitely read the Tocqueville, but start with the Mansfield version and go for that.
Tocqueville does not write about technology.
He spends 10 pages on it in a very esoteric way, calls it the poetry of self-reflection for democratic man.
Weird, awesome.
But he writes about trends that merge with tech.
And they're so, so powerful.
So you got to, you got to read that.
Those are the books that I would probably start with.
And again, on the first one, got to get a group, got to get a tutor.
It's just, it's necessary, I think, to get what you need out of, out of Plato.
Yeah, that's so good, especially in the age of like lack of attention and inability to.
Oh, there's one more.
Oh, please.
Sorry.
"The Use of Knowledge in Society."
This is an essay, again by Friedrich Hayek, who I think is
wildly underappreciated for his tech insights. He is an incredible thinker on the use of knowledge
in society, which obviously AI affects. So I would read that essay. That's an incredible
one to read right now. Yeah, it's so good. So if you're hungry and desiring of more of this
information, I highly recommend you go to Cosmos. I also think, I'm excited for the YouTube channel.
I also think they should follow you on X, where you've been out there more. That's where I first saw
your idea about entrepreneur nation versus employee nation.
Yeah.
And I know you're going to put more stuff out there.
So what's the X handle where people should follow you?
So it's mbrendan1.
Yeah.
And we're going to work on your naming conventions.
I'll warn you.
Yeah.
My stuff is all original.
It's all very,
it's hot takes.
But it's like it can be a little bit low TAM.
So we're working on,
on that.
But it's cool.
I mean,
I love it.
When I started engaging authentically on Twitter,
it actually, like, improved my thinking.
It was a key part of my intellectual journey, actually.
So I really love it.
Yeah, it makes you synthesize really complex things into really simple things.
And you're rewarded, if it resonates with somebody, directly by the feedback loop, which I actually think is really beneficial.
And often I think people try to go super highbrow with the things that you talk about like AI.
And you do a really good job of saying, well, yes, I can match you there.
We can go to that level.
But actually, bottoms up, we need more people to utilize this, not just the few on top.
who are already there.
So I hope you do more of that.
Brendan, it was such a pleasure to have you here.
Likewise.
I just psyched you out that we were going to be done, but I want to do my favorite part,
especially in this age of like humanness.
So at the end of the podcast, what we do is we give everybody a little postcard.
And on the postcard, we give you a few minutes to write to young Brendan.
So you pick the age, but something before you feel like you came into being a man,
a little message.
Like, what would you tell him today in the world that we stand in today?
It can be in the context of anything we talked about or anything else.
And then you're going to read it.
And it's going to be this, like, the idea is every time we do this, it's a reminder that, like, even people who are in positions of power who are on podcasts like this,
they have that young voice inside of them, too, that maybe didn't know.
Okay.
All right.
So right around the time when I shifted from kind of pure-play tech to philosophy, my wife
shifted into mental health counseling.
So this is her passion.
What she caused me to realize is that a key factor in my success was guilt.
It was this idea that smart is as smart does, my mom always used to say.
But it drove me every Saturday morning, every Sunday morning to like, if I got up,
I was like, I got to be working.
I got to be working.
And what I realized
is that the parts of you that are ingrained, that made you successful, become really, really strong.
And then there's a little bit of an atrophy of the other side.
And so leisure reading that was never developed.
I couldn't do it.
I would panic.
To this day, I still haven't ever meditated, actually.
So I had to develop that.
And I told myself that this smart-is-as-smart-does part of you is so dominant: cultivate and develop, like a muscle, the crucial complement of leisure.
And then I drew a cross section of my brain.
It's showing.
Let me see it.
What's it look like?
It's got the guilt part here.
Oh, I like that.
But I should show it growing because it's a redemptive story.
Is it?
Yeah.
Are you getting better at leisure now?
Yeah.
I'm not perfect.
You know, but I think I've come to realize that it's a crucial part of, you know, a full human life.
I love that, Brendan.
Thank you.
Thank you.
I hope you were listening as closely as I
was to this episode because I think people will become billionaires this year, next year,
the next five years as one-person companies from AI. I think there will be billion-dollar,
one-person companies created from AI, from what I have seen firsthand. But maybe more than that,
I think we will be able to have employees for the first time ever, even when we don't have a
business in the form of AI. I think all of my employees will have employees going forward. I think
we can run our own mini-businesses inside of our companies. I actually, after listening to him,
some of the things that I took away the most were that he's pretty optimistic about AI, as long as we
keep doing the thing that we are doing together here, which is questioning everything. I call this
company contrarian thinking because I want us to think. I want us to hold up our brains and actually
analyze the world around us. And if we don't do that, I think AI takes over entirely. And AI is really
just letting other humans who build it
hijack your brain, tell you what to think,
tell you what to do, and take over your life
without ever having a war.
The war will have already been fought,
and it's with a keyboard, and we welcome the enemy into our home.
And so I don't think that AI is the enemy,
but I do think if you are not really thoughtful
of being the type of human that pushes back
on the things that are told to you at face value,
then you need to start becoming that.
And the way that you do it is by listening exactly like you are here today.
And then by questioning everything we say, just like you question everybody else.
And I know that you question us sometimes because only 53% of you are subscribed.
So you loved this podcast.
You got all the way to the end.
And then you're not subscribed?
What do you want?
You want the AI robots to become your overlords?
So if you guys want us to continue bringing you the best content
we humanly can, to obsess over every guest you want, and to continue to ask the tough questions,
please subscribe to the channel. We want to get to know you and build a real decades-long relationship
with another human, by the way, no robots here. And I want you to know that you are a big deal
to me, and it is an honor for me to have you on this podcast with us today. So please make sure
to subscribe, and I'll see you next time. Hey, guys, if you've ever thought about buying a business,
we've built what I think is the best acquisitions and business-buying community and education
curriculum in the world. If you've ever thought about wanting to buy or own a business or if you want to
add more businesses to the mix, it's called the contrarian community. And what this is, is the goal is
we give you the three things that the best business buyers use, your own advisory team, your own
investment committee and a deal team. We get together each week to review deals live and beat up
all the deals that you're currently looking at while you simultaneously learn the best way possible,
which is called modeling by seeing other people put together deals.
This is how private equity buys businesses.
This is how investment teams work.
And we're stealing the methods from Wall Street and giving them to you.
If this is interesting to you, click the link,
and you can actually talk to my team directly about whether this is a fit or not.
We can help guide you.
The link is in the show description.
