Moonshots with Peter Diamandis - Aliens, AI Weapons, China & Global Conflict: Palmer Luckey Sounds the Alarm | EP #169
Episode Date: May 6, 2025. In this episode, Palmer and Peter discuss the China AI race, Aliens, the probability of ASI, and more. Recorded on May 1st, 2025. Views are my own thoughts; not Financial, Medical, or Legal Advice. Palmer Luckey is an American entrepreneur best known for founding Oculus VR and inventing the Oculus Rift, which helped revive the virtual reality industry. He sold Oculus to Facebook in 2014 for $2 billion. After leaving Facebook, he founded Anduril Industries, a $14B defense tech company specializing in AI-powered military systems. Learn more about Exponential Mastery: https://bit.ly/exponentialmastery Learn more about Anduril: https://www.anduril.com/ ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ Get 15% off OneSkin with the code PETER at https://www.oneskin.co/ #oneskinpod _____________ I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Newsletter _____________ Connect With Peter: Twitter Instagram Youtube Moonshots
Transcript
Over the last two years there have been an unprecedented number of congressional
hearings about NHIs, non-human intelligence, and alien craft. What are
your thoughts? I've probably seen things that I can't talk about. It's hard for me
to have an open discussion about it. I think I can plainly say. So my name is
Palmer Luckey. I started Anduril because I wanted to work in the national security space for a variety of reasons.
o3 just came in at a 135 IQ.
I'm not betting my company on super intelligence, but I do believe it will happen.
It's not US versus China. It's US and China versus the rogue actor.
China is not going to purposely build a tailored bio weapon that wipes out all the Jews,
for example. But at the same time, China's made real material threats and said they are going to
reunify with Taiwan by force if necessary within this generation. Now that's the moonshot, ladies
and gentlemen. So, Palmer, you've been on moonshots like four times in the last two years.
And as a friend, there's a bunch of questions I'd love to ask you that I truly, deeply want
to know the answer to.
Let's do it.
All right.
So, here's the first one.
Over the last two years, there have been an unprecedented number of
congressional hearings about NHIs, non-human intelligence and alien craft, by like the
highest level generals, admirals, air force. It's crazy. What are your thoughts?
I want to believe.
You want to believe.
It's, this is a tough topic for me because I've probably seen things that I can't talk about.
And if that was the case, it's hard for me to have an open discussion about it.
I think I can plainly say I have not seen evidence of anything that is conclusive, obvious.
I haven't seen recovered craft. I have not seen, you know, the programs that are analyzing alien wreckage.
But I have seen things that are not necessarily public that are very hard to explain.
What makes it difficult in the public eye is that there are dozens of examples of very
strange things that can be explained in one way or another. There's really only a very small handful
that even when you really dig deep,
there is no explanation for the combination
of human eyes on sensor data, the behavior, the activity,
the timing.
The brain makes a lot of things up.
The brain is very, very trickable, you know,
and it's not just people,
different animals see the world in different ways.
Our perception of reality is, for example,
constantly actually lagging behind what we perceive.
What to you feels instantaneous
is actually as much as a second in the past.
It's amazing.
I mean, like when you clap,
you feel like it's instantaneous.
In reality, your brain is basically filtering, knowing that anything it perceives before it should be perceiving it is not real, and it filters it out. And so, it's really interesting. I once wrote a speculative sci-fi short story about an alien species that's like that, but taken to the extreme. What if your perception was actually hours behind what you observed? If you had something that reacts to things well enough, sees a snapshot, reacts to it by predicting what's going to happen next, how long would it take for a person to figure out that the person on the other side, or, you know, an alien spacecraft, for example, is actually operating under completely different principles of consciousness, awareness, perception, reality?
It's like having a conversation with someone on the moon and you've got a two and a half
second time delay.
But what if the alien was so smart that it was able to predict what you would say in
response to it and then vice versa and then back again such that it seemed instantaneous?
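For reference, the two-and-a-half-second figure is roughly the round-trip light delay over the average Earth-Moon distance of about 384,000 km:

$$ t \approx \frac{2d}{c} = \frac{2 \times 3.84 \times 10^{8}\,\text{m}}{3.0 \times 10^{8}\,\text{m/s}} \approx 2.6\ \text{s} $$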
What would it look like for a being to exist where its consciousness lags even further behind? Because we get along lagging by a second.
Well, what if the lag was a hundred times more?
Is that really conceptually impossible to imagine?
I don't think so.
Just makes for a boring conversation though.
Well, but maybe.
So in the story that I wrote,
people don't figure this out
until bad things start happening.
Basically, they don't understand that the perception of these alien beings is much—the
instantaneous perception is much slower, but they're so smart about reasoning forward
that a person can't actually tell that they're not responding to what you're doing.
They're responding to what they predicted you were going to do five steps ahead.
Now, once you understand that gimmick, you can now do things that take advantage of it.
You know, doing things that are completely unpredictable, that are outside of what they
would expect. And when you live outside of social norms, you can do things that are very
unpredictable. And so from time to time, I write these short stories just to entertain myself.
I've never published any of them, but getting back to the topic, I've not seen anything.
There's some weird stuff. There are at least a handful of examples that are very impossible to explain. And I
think we've talked about this in abundance a few years ago, but I suspect that in the
end it's going to come out to be something that's different than what we all expect.
So probably less likely that it's aliens from a nearby planet. I suspect it'll be something
like some natural phenomenon we have not yet begun to understand.
But you hope it would be super cool.
It is going to be something that is beyond our current understanding.
I think it is more likely, for example, I'm not saying that this is what it is.
It is more likely that some of these craft are somehow traveling through time
than coming from a
nearby galaxy.
If you kind of look at what's more possible, and by the way, I don't mean traveling through
time necessarily backwards.
People say, Palmer, how could they go forward?
Perhaps they're coming from the distant past.
There's a lot of ways to look at this.
I wonder, so what's interesting is- Wait, got to ask. What do you think it is?
I believe life is ubiquitous in the universe.
I truly believe it is.
Probabilistically, it seems likely.
Probabilistically, and I think even in one sense,
almost thermodynamically, I think
that life is the end result of a series of processes.
And where do you fall on dark forest theory?
The proud nail gets hammered.
Life doesn't make it.
I don't have an opinion.
I haven't figured out my opinion. I'm an agnostic on dark forest theory.
So the question I have is,
if in fact it proves out, I mean what I find fascinating
isn't the UFO sightings from the 40s to 80s and the blurry photographs.
It's all of the testimonies that have been given over and over again, congressionally, by
seemingly extraordinarily credentialed individuals who have a lot to lose and very little to
gain in this regard.
So I'm curious.
Decorated war veterans. Yeah.
Politicians who their career is everything
and their credibility is all they have.
Yeah.
And so it's extraordinary.
And so the question is, what would
be the public reaction if, in fact, it plays out to be true?
And I'm, you know, I've always, you know,
you're in the warfare business.
And I've always thought the only thing that
could bring sort of unified peace to the planet
besides a massively dominant force
Yep
Which, you know, is an external threat.
It's like an asteroid coming towards us, a planet killer that we have ten years to organize a response to, or an alien saying, hey, we're gonna come and eat you.
And all of our differences get vaporized in the process.
Yeah, I mean, I do think that that is probably the case.
Historically, it seems to bear out too.
It's not just that we're unified with the people
that we don't care about. Even bitter enemies or
ideological enemies can be unified by a common threat. I
mean, you look at the Japanese and the Germans during World
War Two, culturally, they couldn't have been more
different. Ideologically, you know, the Japanese were subhuman to the Germans.
And the hilarious thing is that the Japanese believed
exactly the same thing of the Aryans.
And yet they allied and worked together
and smuggled controlled materials,
controlled chemicals, back and forth,
things that they uniquely had
because they had a common enemy.
And I think that that could,
you asked how people would react.
I think if they were an enemy, I think we would unite.
I think,
assuming that they weren't necessarily an enemy,
maybe I'm crazy, but I feel like,
I often feel like people wouldn't respond
the way that you expect.
Like, I almost feel like culturally we're so inoculated.
Like you believe that life is proliferated in the universe. I think so does the average person. And I think that if we found out
that, you know, Alpha Centauri, there's some guys living over there who, you know, are kind of like
us, I think a lot of people would be like, wow, that's really interesting, really fascinating.
I'm not sure it would even be the top trending topic on Twitter by day three. I think you'd
have to get a day or two and then it would retreat in the background.
You can get back to housewives of Hollywood.
I mean, people are focused on the things
that are in front of them.
You know, can I get food on the table,
the price of gas, the price of eggs, raising kids.
I think that the existence of aliens
is probably gonna be as important
as the context of those aliens.
Are they coming to burn us all down? Okay, then that's gonna threaten my way of life. Are they just out there in the world?
I think it would prompt a lot of navel gazing from the media class, the academic class,
and certainly the religious institutions. Yeah.
Oh, I mean, I mean, forget about selecting a new pope. I want to see what the Catholic Church does
if intelligent life is proven to exist elsewhere. I suspect actually they would probably be one
of the faster moving entities to say,
you know, the Catholic Church has been pretty clear,
I'm not Catholic by any means,
but I do appreciate that at least
for the last couple of centuries, they've said,
look, anything new that comes to light
that violates our understanding before
is proof of God's plan further revealed to us
and needs to be incorporated into the doctrine.
And I just would love to see how they would deal with that. That'd be very, very interesting.
Yeah, for sure, unprecedented. You know, there's been a lot of popes, not a lot of doctrinal changes on the order of a new species of sentient beings. And of course, the current folklore, as it's often told, is that all our microelectronics technology emanated from UFOs. And then the question is, if the UFOs are real, are China, Russia, India, and the US all vying for advanced technology there?
I will take a stand there.
I don't think our current microelectronics technology came from alien wrecks.
I think that's one area where we deserve the credit.
We made it happen. We figured it out.
Gordon Moore worked hard for his...
I think that we really did make that happen from scratch.
And so, you know, semiconductors, microprocessors, I think we can take credit for that.
If there is technology that's been derived from alien wrecks,
I suspect it's more likely to be related to fission
or fusion or advanced metallurgical or ceramic compounds.
Though gravity shielding would be awfully convenient
for your vehicles.
Well, yes, yes, that is true.
But on the other hand, I have not seen any gravitic drives.
I've kept my eye out.
I think it's one of those things where people say,
well, we're just holding it in reserve
for the right moment.
Looking at how the government operates,
it's just hard to believe that they're capable
of having made it through the last half century of conflict
without ever feeling like that moment was the right moment.
Maybe I'm wrong. And
I think also people understand you go to war with the tools you have, not the tools you
want. And so if you haven't started a program to implement gravitic drives in an aircraft,
look at how long the F-35 has taken to get across the line. The F-35 was conceived during
the Cold War. People think of it as a much more modern thing because of how long it was delayed.
Remember that the Cold War ended December 25th, 1991.
The F-35 program had already started.
And so if you were gonna get a gravitic drive out into a bunch of fighters, apparently it takes 30 years to make it happen.
So I don't really buy into this idea
that there's a secret vault of technology
that's gonna be busted out
The moment that the threat level gets high because reality suggests it'll take us decades to make use of it
Yeah, well, at least for the traditional aerospace and defense contractors. Not here at Anduril.
Yeah, well, Anduril's doing things differently. But I mean, yeah, the government wasn't exactly, you know, betting that companies like Anduril would exist.
I think that was a crazy bet eight years ago.
It's maybe a crazy bet today.
I think most of you know that the news media
is delivering negative news to us all the time
because we pay 10 times more attention to negative news
than positive news.
For me, the only news worthwhile that's true
and impacting humanity is the news of science and technology.
And that's what I pay attention to.
Every week I put out two blogs, one on AI and exponential tech and one on longevity.
If this is of interest to you and it's available totally for free, please join me.
Subscribe at diamandis.com slash subscribe.
That's diamandis.com slash subscribe.
All right, let's go back to the episode.
So there's a different alien race that's
landed on the planet and is emerging right now and that's the whole AI world. Sure. And so let's
jump into that. You know, I saw Eric Schmidt recently saying that AI is being underhyped.
That if you truly understood the power that we have today and what's going to emerge on the back of recursive, you know, self-programming of AI models, that we're in the midst of this intelligence explosion.
And it's about to get really crazy, really fast.
So obviously Lattice and everything you've built
has been a beautiful platform of AI.
How much are you thinking about digital superintelligence? Let's define that as orders of magnitude more intelligent than human systems.
So Anduril was an AI company back when it wasn't cool to be an AI company.
I mean, the name of the company is Anduril Industries.
The acronym is literally AI.
But back in 2017, AI was kind of like how VR used to be.
Oh, it's always in the future, never in the present.
It's the thing for the wacky, crazy people
to waste their lives on, not a serious doer
to build a company on top of.
And I knew that AI was imminent
because the smartest people that I knew
were telling me that and illustrating it
in ways that were very believable.
Showing how a bunch of schemes that had been
improbable for decades were clearly scalable.
One of those people was John Carmack, who was the seed.
I love John.
John is, he's one of the smartest people in the world.
Definitely the smartest I know.
Yeah.
I remember John, so years and years ago,
when the original X Prize got won.
Everybody listening, just so they know who John is.
Yeah, please.
He basically invented 3D gaming,
also started a rocket company,
later became the CTO of Oculus.
He created Doom and Quake and basically the modern 3D game engine.
I mean, he's, and he deeply understands hardware and software.
Sorry, just because a lot of people might not know who John is.
Yeah, I know. He's amazing.
And he had one of our teams in the original Spaceflight X Prize, Armadillo Aerospace.
And they were doing vertical takeoff and landing rockets way before SpaceX.
I mean, like a decade before SpaceX was doing it.
We had this Lunar Lander Challenge where you had to launch, hover, translate 100 meters
to a soft landing and come back.
And I remember back then he had on the side of his rocket this thing, how do you pronounce this? N-V-I-D-I-A. Nvidia. And I know that sounds crazy, but Facebook was considering buying Nvidia. Oculus was considering it. Oculus was part of Facebook.
So it would have been Facebook at the end of the day.
But I mean, you've got to remember, that sounds crazy,
but remember that when you go back to that point in time
where we were acquired, Nvidia was worth like $4 billion,
I mean, like, it's not that crazy.
Very, very affordable.
And remember, you don't have to buy the whole company.
They were publicly traded.
So you just need to take a dominating position.
You don't have to necessarily buy out every share.
And so, I mean, we're like,
you're looking at like a low single digit billion investment to have control of it.
Now people have often looked at that and said,
oh my God, imagine if we would have done that.
Imagine how, what a big deal that would have been.
My point to them is if we had bought Nvidia,
they never would have turned into what they are today.
Right?
They wouldn't have bet on... it would have been focused more on like AR/VR processing. They wouldn't have focused on cluster computing. They wouldn't have focused on crypto.
And then they wouldn't have gotten extremely lucky
in that their crypto architecture happened to be exactly
what you need to scale large language models.
They were extraordinary.
And we've talked about this, that, you know,
your kids playing video
games at home.
But in terms of super intelligence,
so John ended up leaving Oculus, long after I was fired,
actually, because he wanted to work on AGI.
And he told me and now has said publicly
that even though he thinks it was relatively low probability
that he would crack the nut,
that the impact on humanity would be so fundamental
that even a 1% chance of succeeding
made it, on a kind of risk-weighted cost-benefit way of looking at it, obviously the right thing to spend his time doing.
And so that was one of the reasons I had such confidence
starting Anduril and saying, I'm gonna build a company.
Basically the whole premise of the company is, okay,
take for a given that AI is finally gonna work.
Take for a given that autonomy is here.
What would that mean for the military?
And then we've gone about building all the things
that assumed it was true.
That was really our early advantage.
Other companies were not running their programs
and their research and development programs
as if AI was a real thing.
And so a lot of these things we've broken into,
it's not that the people in the Air Force were dumb
or the people in the Navy were stupid.
It's that they were making decisions assuming that AI
wasn't going to be real.
And once it becomes real, well, that changes everything.
Yeah.
And there's a vast difference between an AI native startup and an old school company trying
to retrofit.
Trying to ram it in.
Yeah.
Right.
And there's also a difference between a...
Very different.
The company is premised on AI versus just helped by it.
And a founder led AI premise company.
I mean, the advantages that you have and Zuckerberg has,
Elon has as a visionary leader, able to say,
no, no, no, no, I know this is the way we used to do it.
We're changing it. We're going this way now because it's the right way to go.
Yep.
It's impossible for a large scale public company
in the defense industry to make any kind of shifts like that.
I mean, you know, Zuck had an AI research lab
of significant size back when all this stuff
was considered crazy.
So when they opened up their AI research lab
where they were doing integrated AI and robotics,
I mean, we're talking about like 2014, 2015,
they were doing this.
And a lot of people, including the public markets,
saw it as a folly.
They saw it as Mark working on this ridiculous thing,
burning money, totally a waste of time.
And what it really is, it's what you're saying.
These founder-led companies can make bets
that a hired executive would never make.
You would never have a hired CEO from the outside
who's also thinking about what his next job is gonna be.
He's not gonna say, you know what I think I'm gonna do?
I'm gonna burn billions of dollars on this technology
that everyone thinks is a total waste of time,
and I'm gonna be punished for it,
quarter after quarter after quarter.
And eventually someday it's going to come to bear
and everyone's gonna say I'm right
because none of those guys are usually even around
long enough to see the fruits of that labor.
And even if they were,
there's probably safer bets they could make.
And yeah, founder led companies can afford to do that.
They can afford to say, you know what?
I'm gonna do it anyway,
because this is my company
and I care more about it than anybody
and I'm gonna do the right thing for it in the long run.
It's a powerful thing.
It is powerful, especially when the founder
is so technically literate and has,
his teams revere him, and I'm not saying
you ever said that about yourself, but your teams do, as they do for Zuck, as they do for Elon and others.
Well, you attract people who will. I think that's true because the reality, maybe revere is not quite the right word, but if they didn't believe in the company, they probably wouldn't have joined.
And if they didn't like what they see, they probably won't stay. And so if you're a founder-led company, you're going to attract people who, to a certain extent, reflect the vision of the founder, because you want to attract people who believe in that vision. And then, equally important, repel people who do not.
Yes, equally important.
Well, you saw that ad campaign we did, Don't Work at Anduril. You know, the whole point was, hey, we work hard, this is a real job, you're gonna be in the shit. This is 80-hour work weeks, guys.
And we have a lot of people coming in and saying, like from the outside, oh, this is a bizarre campaign, like, why would you do a campaign about how hard it is to work at Anduril?
And the point is, guys, this is going to attract
exactly the right type of person,
and most importantly, repel anyone
who wouldn't enjoy working here once they get here.
And by the way, our applications went up 3x
the week after that campaign,
and they're all exactly the type of people we want.
I believe that's beautiful.
So one of the challenges a company has as it matures
is deciding whether to go public or not.
And I don't wanna get into
whether Anduril is going public or not,
but eventually my guess is yes.
We actually do.
So we are going public.
We have to at some point.
So how do you win an F-35 scale program without doing it?
So let's talk about that.
So you've got Elon saying,
I will never take SpaceX public, right?
I don't wanna have shareholders telling me
whether I can spend money to go to Mars, right?
And I don't wanna disclose all of my secrets in a 10Q and so forth.
Then you've got folks like Bezos, you know, I've known Jeff for 40 plus, 45 years almost.
And, you know, Jeff famously says, don't invest in Amazon if you're looking for me to maximize near-term returns for shareholders.
It's like, I'm going to build, build, build.
How do you balance the benefits of going public so you can enjoy these large contracts at the same time of the agility that you've survived and you've thrived in?
So I've never run a publicly traded company. So take this with a grain of salt.
It's as valuable as what you're paying for my advice, which is nothing. Now you have a CEO. We do. We do. So our CEO is
Brian Schimpf. And you've done an amazing, he's an amazing individual. He is. And I think we're
totally agreed. Everyone's in alignment on this. When we become a public company, we have to keep
doing what we've done in the private markets. It's really no different than hiring. We need to attract people who believe in our vision
and repel people who don't believe in our vision.
A lot of people imagine that if Anduril goes public, we'll become like other public defense companies: taking less risk, not willing to invest in the future, paying out dividends rather than investing in R&D.
Maximizing quarterly returns.
But that doesn't have to necessarily be the case.
Even when you transition to the public markets,
you can take action through your communications,
through your filings, through your decisions,
that scare away people who want you to be
like a traditional company.
You want to attract investors who believe
in your vision of the world.
And if that's your whole investor base,
they're not going to force you to be something different.
I think Elon's other company, Tesla is actually probably the strongest example
here. Tesla has an extremely high price earnings ratio. Why?
Because their investors believe that they are going to win across the
board on a multi-decade timescale. They think they're going to win at robotics.
They're going to win at energy.
They will.
And I think they have a very good shot
at winning on all or most of those items.
And you could ask, well, wait, they're
a publicly traded company.
Why aren't they like one of the more traditional automotive
companies?
Why weren't they forced to be more like a traditional company?
And the answer is simple.
Because they've cultivated an investor base
that believes in what Tesla is,
and they've repelled everybody else.
Many of the people they've repelled
are now shorting Tesla because they don't believe in it.
I think it's the same way for us.
We're gonna need to attract people
who believe in what we are, repel everybody else,
and if we ever start getting enough of an investor base that is pressuring us to do
the wrong thing, I'm going to need to go in the press and say some crazy shit to scare
them all away because I don't want those guys voting at my quarterlies.
I don't want them picking my board members.
I want all the Anduril haters, and people who want Anduril to be what a defense company used to mean, to stay away.
Your software is the majority of your workforce and the majority of the products you're developing.
Digital superintelligence. Let's define that as, yeah, so very famously, we saw AI reaching IQs just above human levels, of 101. This was Claude 3, and then o3 just came in at a 135 IQ.
And so the prediction is that, well, you know, Elon's prediction when he was on our stage at the Abundance Summit a couple years ago is as smart as all humans combined by 2029 or 2030. So that's a vastly accelerated curve.
Yeah.
How do you, are you skating to that, where that curve's gonna be? How are you thinking about it?
I am,
am I skating to where that curve is gonna be?
You know, I'm actually probably running my company
a lot more pessimistically than that.
To bet that those most optimistic predictions
will come true is probably not a responsible way
for me to run my company.
Sure, but a billionfold, eight-billionfold, let's just say a millionfold smarter than a human.
I think we are operating under the assumption
that will happen.
And you could pick almost any point in even the last,
let's say, two years.
And people say, well, AI might be getting smarter,
but it'll never be able to do this.
And then within weeks or maybe months,
it's doing it. And they say, well, but this video has this problem. You see the man has
six fingers. And so that proves that it will never be able to replace a real illustrator.
And then of course, you know, you wait a couple weeks and all of a sudden that's no longer the
case. So I'm not going to be one of those people who bets that we're not going to get there.
At the same time, I would say Anduril's thesis makes sense, even if AI doesn't get smarter than a person.
You know, I needed to have fast reaction times.
I needed to be able to do things to think much faster.
So for example, processing what would have taken a person
a month to process.
If I can do as good as a person would have done in a month, but do it in a minute,
that's a superpower in and of itself for military operations.
So I don't actually need things to be super intelligent.
I just need them to be better than people at speed, at latency.
Also, you know, if I've got a truck and I need a truck to drive itself around,
I don't need to have 135 IQ.
100 IQ is more than enough to drive a truck sufficiently well,
especially if it's, you know, 100 IQ that's not distracted,
not sleep deprived, never going to be abusing substances.
Like that's actually pretty great, especially when you can duplicate it
100,000 times for free, unlike a trained truck driver.
So I'm not betting my company on super intelligence, but I do believe it will happen.
And I have to imagine that with super intelligence in the warfare game, even just a small advantage...
It often is.
Could make a huge difference.
Well, so I often tell people, what would you rather do?
Would you have an airplane that is twice as fast
or an airplane that makes decisions that are twice as good?
Or put another way, would you rather have an airplane
that can carry twice as many weapons
or be twice as smart about which targets to use them on
and when?
Would you rather be able to predict the next five minutes
of combat better or would you rather be able to,
would you rather be able to have sensors to see more,
like would you rather predict the battlefield
or actually sense it?
And in most cases, it's the software advantage
that you'd rather have.
Like, I would rather...
Which by the way scales much faster than the hardware
advantage ever will.
Well, it scales faster.
Every copy of software you duplicate
is free once you've invented it.
And it can often be applied to many hardware systems.
If I make one aircraft better by investing in that one airframe
design, it's not nearly as useful of an advantage
as a piece of software I can deploy
to 10 different kinds of aircraft.
And so, I mean, that's really the core of Anduril.
Our core product is Lattice, which is the AI engine
that powers everything we do.
The reason we've been able to pivot
into so many different industries
is because we invest so much in that AI platform
that runs all of our products.
So much of the world paints the-
I do have to tell you, actually, going with this, I haven't thought about this in a while, but have we ever talked about kind of the philosophical origins of Lattice?
I don't know if we have.
No, I'm gonna guess Skynet.
I mean, Skynet is the fictional example everyone thinks,
but there was a French mathematician and philosopher,
Pierre-Simon Laplace.
I used his equations in school.
Best known for Laplace transforms.
Yes.
But he had this thought experiment
known as Laplace's Demon.
And it was a thought experiment around
whether free will exists or not.
And this was before anyone was talking about
simulation theory. We didn't even have computers. But he posited, well, to think about whether free will exists, suppose that there was a supernatural being, this demon, that was so perceptive of the world that it could perceive every particle of matter in the entire universe, the energy contained therein, the motion contained therein. He could perceive it all at once.
And also suppose that this being were so intelligent
that he could in an instant reason about the reactions
that will occur as they collide with each other
and physics occur and chemistry occurs.
And he could reason so on and so forth all the way until the end of time. Suppose that such a being could derive
an equation that describes the actions of every person in the universe until the end
of time. And his question was this, if that being can even exist theoretically, doesn't
that mean free will isn't real?
Is everything deterministic,
just physics playing its way out?
And so the question was also,
are there things that change this idea?
Are there non-deterministic elements in our universe?
Are there supernatural effects?
Are there spiritual effects?
Are there things we cannot observe,
that nothing can observe?
Could it be that the act of observing it
in fact changes the outcome such that- Enter quantum.
This is long before quantum theory was in play, but he was asking these questions. Could it be that
such a being is not possible? And I think he posited that if such a being is even theoretically
possible, free will definitionally does not exist. And that if such a being is impossible, then at
least free will is a possibility.
And most people get in deep into the philosophical side
of this question.
When I became familiar with Laplace's demon
as a thought experiment, my first thought is,
who's gonna build Laplace's demon?
I mean, what would that look like?
What would it look like to build something
that is as close as you could get?
It perceives as much of the world as you can.
Omniscience.
Omniscience, and not just on the present.
What if you, this idea of gathering enough information
to be smart enough to reason about where it's going to lead,
it's the same as seeing the future
or even traveling into the future.
What if I could predict what my enemy was gonna do
10 seconds into the future with a high degree of certainty?
What if I could predict what he's going to do
for the next week with reasonable certainty?
It won't be right every time,
but you spread across enough bets.
You take 10,000 guesses and 9,000 of them are right.
That's a superpower.
I mean, that would seem superhuman.
And so in terms of super intelligence,
that is actually what lattice is supposed to
eventually become by tying enough sensors together. You can build a model of the world where you can
react not to what the enemy is doing, but what they will be doing. And I think that that's the
type of capability where I'd rather be able to predict where my enemy is going to be and what
my best response is, than be able to have a jet that goes ever so slightly faster.
I mean, what if I can start going to where I need to be,
skate to the puck,
because I've predicted that's where I need to be.
I'd rather have that.
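To make the skate-to-the-puck idea concrete, here is a minimal, hypothetical sketch in Python of extrapolating a track forward from a few fused sensor fixes. It assumes a simple constant-velocity model and is purely illustrative; it is not Lattice or any real targeting system.

```python
# Toy illustration of "act on where the target will be, not where it was last seen."
# Hypothetical constant-velocity extrapolation over a short track of sensor fixes;
# not any real defense system.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fix:
    t: float  # time of the observation, seconds
    x: float  # meters east
    y: float  # meters north

def predict(track: List[Fix], lead_time: float) -> Tuple[float, float]:
    """Extrapolate the last observed position forward by lead_time seconds,
    using the average velocity across the whole track."""
    first, last = track[0], track[-1]
    dt = last.t - first.t
    vx = (last.x - first.x) / dt
    vy = (last.y - first.y) / dt
    return last.x + vx * lead_time, last.y + vy * lead_time

# Three radar fixes, one second apart: the contact is heading roughly east-northeast.
fixes = [Fix(0, 0, 0), Fix(1, 30, 10), Fix(2, 60, 20)]
print(predict(fixes, 10.0))  # predicted position 10 seconds out -> (360.0, 120.0)
```

A real system would replace the constant-velocity assumption with learned behavior models and carry uncertainty with each prediction, but the payoff described above is the same: you position assets against the predicted state rather than the last observation.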
Yeah.
You know, in the commercial world,
I talk about this as a trillion sensor future,
where you can know anything you want
because the sensors are there.
You can predict a man's blazer color on Madison Avenue because you can ask your AI to look at the camera feeds
and so forth. And if you can know anything you want then what's
interesting is it's important to ask amazing questions. The questions you ask
are more important than what you know at that point. The world paints US versus China in the AI space.
I had a conversation with Eric Schmidt about this,
where the concern in my mind, and I think in Eric's as well,
I'm curious for yourself, is not US versus China,
it's US and China versus the rogue actor.
The individual out there who's using digital
super intelligence to code up the next virus or code up the next hack, whatever the case might be.
Sure. How do you think about that here?
I would probably take a bit of a counter position. I mean, look, I'm very worried about
rogue actors because rogue actors don't necessarily act as rational actors. Nation states
typically exist on a rational basis. And you could take kind of extremist theocracy like Iran,
and you could argue, well, they're, they're not acting rationally. But those are few and far
between. In general, nation states follow game theory, they act in their rational self-interest,
they don't want to destroy themselves in the
process of destroying you.
Whereas when you bring that down to the level of an individual
or a small group, you can have people who believe they win by
losing, they could think that them dying is the victory, they
could believe that bringing out an apocalypse is their destiny.
And so I'm terrified of, for example, tailored bioweapons built by rogue
groups. The idea, though, that it's the US and China versus these rogue groups,
I'm not so sure. I think that China on its own poses its own unique type of threat. It, of course, doesn't terrify me as much. China is not going to purposely build a tailored bio weapon
that wipes out all the Jews, for example.
I don't worry about China trying to do that.
But at the same time, China's made real material threats
and said they are going to reunify with Taiwan
by force if necessary within this generation.
I watched your most excellent TED talk and the war gaming.
And it's chilling to see how that plays out.
One, it's not just Taiwan. China, I mean, in living memory, China tried to invade Vietnam. A lot of people want to pretend, they want to whitewash China,
they want to whitewash China,
largely because they're often working with China.
So they have to carry water for them.
And they say, well, China, Taiwan is a special case.
There's this long history.
I say, okay, well, what about when in living memory
they invaded Vietnam?
What about the fact that they are currently occupying
huge swaths of territory in the Philippines?
What about where they're illegally building
artificial islands in the sovereign territory
of other nations?
What about the fact that you now have Xi
going to conferences and saying that they think
Okinawa, Japan, is actually a territorial
vassal holding of China?
He's pretended to have this awakening.
He says, well, you know, the Okinawan people
used to be a tributary state to China.
They gave us tribute and we failed them
by failing to protect them from being taken over by the Japanese imperialists. He's laying the
groundwork for- Revisionist, revisionist history. Well, yeah, he's wielding revisionist history
because he knows he can't motivate a bunch of young Chinese guys to go and take over a territory
they have nothing to do with and
absolutely no way of convincing anyone that is theirs. He has to tell them a story where this is
actually part of the great Chinese empire. And so that is actually where I think China is its own unique threat. They are willing to reinvent history with their own population,
much like how Russia has with Ukraine to justify death and violence at mass scale.
I think they want to take elements of Japan.
They want to take elements of the Philippines.
They want Korea, they want Vietnam,
and certainly they want Taiwan.
Imagine what a world looks like
where China achieves even half of that.
And by the way, a rogue actor's not gonna do that.
That's what makes it such a unique threat.
A rogue actor might make a virus,
but they're not gonna take over a democratic nation
and seize control of the semiconductor supply.
Having said all that,
what you said about being rational actors
and being able to take actions politically and militarily
to prevent that from occurring,
gives you a game plan.
Oh, 100%.
But the question I have, I mean, is,
are Anduril's systems, in the idea that Lattice is giving us an omniscient level of knowledge...
You know, to prevent rogue actors,
it is I think gonna be critical to have enough data
of what's out there and being able to track it.
I think that you, do you imagine that
as part of your future?
I think it's a part, honestly,
I'd probably have to give more credit
to companies like Palantir.
Like I think they're building more of these, not quite at the tactical edge, real-time tools that allow you to find these bad actors.
They've been involved in apprehending and killing a lot of really dangerous people.
Terror cells, multi-time violent criminals.
I think Palantir and companies like them are actually probably doing that. Like if I had to split it, I'd say a company like Anduril is much more relevant to a more traditional hard power deterrence theory that stops a rational actor like China, less so a rogue non-state group.
Everybody, I hope you're enjoying this episode. You know, earlier this year, I was joined on stage at the 2025 Abundance Summit by a
rock star group of entrepreneurs, CEOs, investors, focused on the vision and future for AGI,
humanoid robotics, longevity, blockchain, basically the next trillion dollar opportunities.
If you weren't at the Abundance Summit, it's not too late. You can watch the entire Abundance Summit online by going to exponentialmastery.com.
That's exponentialmastery.com.
Let me flip to the positive side of ASI, of advanced super intelligence.
There's a lot of breakthroughs that are on the precipice, right? We just saw the first Nobel Prize given to Demis Hassabis and John Jumper for AlphaFold.
What are you hoping for out of sort of advances
in physics and math and science?
Medicine.
Medicine.
So I think there are so many elements of low-hanging fruit
that we have not been able to seize,
partly because of the regulatory climate, but also the cost of developing and testing new drugs
is so high, not just drugs, but new therapies. Therapies that require continuous intervention
and monitoring, we've not had the resources to try everything. You have to pick very, very tightly
what you're going to do. And even then it mostly doesn't work. I'm in the business, I know.
And so automation at scale of those,
I mean, what if instead of one lab, you could run 10,000?
What if instead of running 10,000,
you run a million simulations?
I, so medicine, I'm very optimistic.
I think energy is another area where right now,
like I think that AI assisted design of fission
and fusion energy generating systems
is going to be a massive, massive change in the way that we use energy.
Energy is such a huge part of our way of life.
It drives food cost.
It drives the cost of material.
It drives the cost of shipping.
The GDP of a country.
That's right.
And there's really no examples of high GDP countries
that do not consume lots of energy.
Not necessarily produce.
There are ones who buy their energy from elsewhere.
Consume is right.
But it takes energy.
It takes energy to build the future.
And so I'm very excited there.
Do you ever see Anduril getting into the adjacency of the energy space or the biotech space?
The adjacency?
I think maybe we're already a little bit adjacent, but I think we're really focused on our mission of trying to modernize military capability and use of force.
And doing it well.
So let's get it.
Like with energy, we partner with a lot of these companies.
So there's a lot of companies that are doing interesting
things in the nuclear power space.
We're partnering with them.
I don't see any reason for me to try to compete with them.
I wanna be a customer of theirs.
And I wanna use the DoD as an early customer
that can help accelerate the deployment of these new ideas
in how to split atoms and how to fuse atoms.
I want to talk about the speed of defense system innovation.
Yeah.
And just a few metrics for comparison here, looking at the glorious days of World War II and manufacturing and innovation. So the first, you know, Kelly Johnson brings on the first US
jet in 143 days from clean sheet of paper to a jet flying.
Yep.
The Liberty ships got cut from 230 days of production to four and a half days.
The P-51 Mustang fighter goes from concept to flight in
102 days.
And one of the references you had was the B-24 Liberator bomber, one every 63 minutes, by Ford.
Isn't that incredible?
That's insane.
And like these were not small planes.
No.
I mean, these were flying fortresses.
What happened?
I mean, yes, it was a war footing,
but I studied Kelly Johnson and Lockheed Skunk Works,
and his philosophy of, I mean, if I remember correctly,
what he did was he had a single blueprint
in the center of the workspace,
and any of his engineers could go and make a change on it,
but they had to sign their name to it,
because they knew if they made a mistake,
it was someone's life.
Yep.
And the rate of iteration was so rapid.
What happened that killed that level of innovation and iteration?
Look, you can blame a lot of things.
I think actually it's probably the end of the Cold War.
The end of the Cold War was what did it. I'm not saying that we should have continued the Cold War. It's just that that is what caused the change to happen.
The United States government came in
and you may be familiar with the Last Supper.
They brought together the heads for one dinner
of all of the major defense companies.
And they said, there will be consolidation.
Half of you should not exist by the end of next year.
Like consolidate, consolidate, consolidate.
The party is over. We are going to decide who the winners are.
Musical chairs.
And if you don't get with this program,
then you're out and you're done.
And it was very much a top-down driven thing.
So you ask, why did the innovation go away? Why did the speed go away? It was because there was no longer a drive to move quickly.
There was no longer a government directive
to move rapidly against threats.
And we moved into a peacetime posture
that was willing to accept a high level of inefficiency
because they felt like that was okay.
And I think it went worse than they expected.
I think they expected some level of inefficiency.
They did not expect that reduced industry
to then capture the political side
and maintain that inefficiency for decades.
And so it was one of those,
it was one of those kind of okay ideas
that didn't turn out so well.
And by the way, the argument that the people who architected the Last Supper would make is that we made the right decision. They'd say, look, we reaped a peace dividend.
Look at what we did through the 80s and the 90s in the early
2000s. Look at the economic growth in the United States.
It's hard to argue with the results. They argue that we did
reap a peace dividend on the back of this. But we can still
recognize it was a problem for our military
prowess. We had that huge explosion of economic development and technological investment elsewhere
to the detriment of our military. And one last thing I'll say is a lot of the smart people left.
A lot of those people who helped build things like GPS for the military, they didn't stay in
government labs. They went into the private sector. And now we have a proliferation of things that rely on GPS.
And look at the microprocessor industry.
It was the same thing.
These same people who built microprocessors for the DoD,
they instead, we had the explosion of Silicon Valley.
And those people, that's where the smart people went.
So that was definitely also to the detriment
of organizations like Skunk Works.
And that's been your philosophy to pull that talent out.
Pull it back is the way I look at it.
Exactly.
You know, we're just bringing them back to where they were.
There's a long tradition of the smartest people
in the country wanting to work on national security problems.
And there was a time where that wasn't the case.
I think that that's finally reversing.
I want to dive into your design philosophy here at Anduril. Sure.
You know, I've spent a lot of time with Elon talking about his design philosophy at SpaceX.
It's like, and it seems there'd be a very similar parallel. It's like simplify parts count, simplify designs, but not
overly simple. So how do you think, you know, how do you think of your design
philosophy in the systems that you're building? Boy, this is a huge question. If
there is an overarching one... Yeah, well, I'm trying to think what are some of the common threads. I mean, one of the common threads that I think is different about Anduril than people would expect is that we generally do not vertically integrate.
SpaceX, Tesla and others have really fetishized vertical integration.
And it makes sense for some of them.
It really does.
But when I get pressure from usually people who don't know my business that well,
they say,
oh, they kind of assume like, oh, when are you going to bring this all in house?
I assume so as well.
Well, the thing you have to remember is that when you are building, let's say, space launch systems,
your customer base is pretty well known months or really years in advance.
You know what your schedule is, you know how many rockets you're going to need.
You can plan all of this very predictably.
That's not necessarily the case for weapons production.
You need to be ready for shit to hit the fan and to 10x or 100x your production.
Got it.
It's actually pretty irresponsible.
Ramping up and ramping down.
If I build a vertically integrated capability where I build every wiring harness, I don't
work with any partners on my fasteners, on my composites, on my casting. If I can only do that in-house, what happens when the DoD
suddenly needs a hundred times more of that system every single day? Well, that means I have to build
a hundred times more factory space. I need to hire a hundred times more people. How the hell
am I going to do that? What's much more responsible is for my engineers
to design a part that can be made by any machine shop
in the country.
And sourced everywhere, yeah.
Yeah, to pick an adhesive where there's 10 suppliers in the country, not just one. And certainly not something we only make ourselves.
And that means that if I need to ramp up,
I can multi-source these things.
I can ramp, I can outsource it to lots of other places,
or I can do what we did during World War II.
I go to the industrial capacity that exists for,
let's say, American automotive industry,
or the American commercial aviation space,
and you take over, and you say,
hey, good news, our submarine can be manufactured
by the same robot arms, the same plasma cutters,
and the same assembly lines and people
that were cranking out cars yesterday.
That's how you build a resilient defense infrastructure.
And I mentioned this in my TED talk,
we have to design for mass production
using existing infrastructure.
You can't assume that you're going to have the time
to build an alien dreadnought to build your thing.
And that's, again, Tesla with the Model 3,
they wanted to build this hyper optimized capability.
But Elon's never gonna have one year
where he needs a hundred times more Model 3s
and then the next year, a hundred times less.
It's just not quite like that.
Makes sense.
Going beyond that, and I completely get it, it's crystal clear now.
In terms of how rapidly you iterate a product,
how you focus on parts count, materials and so forth,
are there other design elements that you have as a basis for the company that you learned perhaps when you were at Oculus?
Well, I mean, there's so much that I've brought from my Oculus days.
Because I mean, that's what makes this company different.
Well, what's interesting is so many of these things are, I almost don't want to, if I belabor them, it sounds like I think I'm a genius for doing things the way that are just already done everywhere. I mean, what we're doing is taking the same approaches to design, design review, velocity of manufacturing, from, you know, the consumer electronics world and just bringing it to defense. I mean, you know, with Oculus, we were launching a new product every single year.
We had to manufacture millions of virtual reality systems.
And it's a totally different mentality than you see
in the defense space.
And so I'd say the main thing we brought here
is just do it like that.
Just do it the way that you do it in industries
where you have to move fast, where you can't afford to fail.
Like imagine if the iPhone was delayed
by four years. Like iPhones get delayed from time to time, right? You've seen this happen,
but it's usually, oh, we missed by a month. Well, but you know, they'll be over here soon, or manufacturing was behind, and so they couldn't send it over on a boat, they had to air freight it and they lost a little bit of money. Have you ever heard of an iPhone being delayed by four years? How about 20 years, right?
Have you ever heard of it?
Because it's just, it's unthinkable.
And so a lot of what I do is just doing things the way
that they're done in industries that aren't subsidized
by taxpayer dollars that can't afford to fail.
When you skin your knees when you fall,
you're a lot more careful to not trip.
And I think that that's really what has helped Anduril
in an overarching way.
We hire people from consumer electronics,
from the automotive industry,
from the maritime industry,
who are used to working in those kinds of conditions.
Do you ever expect the tech you develop at Anduril
to go back into the consumer space?
Not really.
Maybe it's even a breach of fiduciary duty,
but I just don't have a big interest in it.
I started this company to fix national security.
And early in our company's history,
we had the opportunity to do quite a bit of commercial work
that I think would have actually grown faster than our
DOD work. And that would have been a problem. Imagine a world where Anduril has a product
line where half of the team is dedicated to military and half is dedicated to, let's say,
commercial like oil and gas security or critical infrastructure security. And imagine a world
where the commercial side
is growing three times as fast.
What investor is gonna allow me to continue
to spend half the team on the thing growing
at a third of the rate?
I was terrified early on that that could become a reality.
It was actually similar to our border security work.
I was worried that that part of the business
would put us in a position where we weren't able to
invest in the military side. And so there were times where we said, you know what,
we think we could make money there. That is not our mission. We need to stay laser focused on our mission. That's how we're going to get to where we want to be, which is being the next-generation defense product company that, really, the first page of our first pitch deck said: Anduril will save taxpayers hundreds of billions of dollars
a year by making tens of billions.
I love that line.
Love that line.
And that was the mission.
So will it come to consumers?
I don't know.
And I'll finish off this bit by also noting,
anything that we sell to consumers is at the end of the day
going to end up in the hands of our adversaries.
People have asked me over and over again,
Palmer, you're building Eagle Eye,
this new integrated vision augmentation system
that's giving soldiers superhuman thermal vision,
night vision, augmented views of the world.
When are you gonna build a consumer version of that?
I would love to,
except it will end up on the heads of Chinese commandos. And they'll say,
oh, Palmer, there's export restrictions. Yeah, but if you're selling something to
civilians, eventually you will sell to a traitor, and that traitor will get that gear into the hands
of your enemies. And so, you know, the Russian special forces, they're not wearing Russian gear.
They're wearing American night vision, American helmets,
American armor.
They're using the best.
And that's because they prioritize getting these things
smuggled out of the United States and into Russia.
And so you sure, maybe you can stop it from being on
every Russian soldier or every Chinese soldier.
But I mean, how do you think I would feel if I built
advanced capabilities that we sold to civilians
and then in an invasion of Taiwan scenario, a bunch of Chinese commandos drop out of helicopters and kill all the top political leadership of Taiwan using Anduril gear.
I mean, that would be the worst reversal I can imagine, in terms of intent versus effect.
So that is my biggest problem with selling back to civilians.
I would only sell tech that I don't worry about getting into the hands of the bad guys.
It makes a lot of sense.
And that's not what you're passionate about.
If I figured out how to do...
We're not doing this.
But if I figured out how to do, let's say, better biological defense, like I've long been interested in long incubation antibiotics. So
things like antibiotics that are encapsulated, live in your body for long periods of time,
and are only released when you have some biological trigger that causes them to be released and become
active. Or you know, like biological antibiotics, same idea. Sort of a loitering defense system.
A loitering defense system.
A loitering defense system,
but one that is only active in the bloodstream
when there's a threat.
Because if you just have it all the time,
like if you just loaded people up
with antibiotics all the time,
you would create superbugs
because they would continuously be active in people.
So like suppose I figured out how to do that
and there was crossover to the civilian side,
that I would absolutely be a fan of, but I'd have to make sure that I'm not
inadvertently giving a tool of great power to an adversary.
I want to jump five years out.
It's 2030.
What does warfare look like in 2030?
You've got AI far more advanced, humanoid robotics, and I know your position on humanoid
robotics, but the ability to enhance super soldiers takes on a brand new meaning.
Yep. You know, drones have gone from zero to infinity in record speed, it's extraordinary. What are you thinking?
I hate to be a cynic here, Peter, but I actually think warfare in 2030 is going to look more or less the same as it does today, with a few very small exceptions where breakthrough capabilities are getting in.
I said earlier, you go to war with the tools you have, not the tools you want. The reality is the vast bulk of our arsenal was built a decade or two or three ago. And so even as
companies like Anduril move very, very quickly, like we're trying
to build things that are relevant to a fight with a great
power, whether it's Iran or Russia, or particularly China.
But even if we move at breakneck speed, as fast as we can, we're going to end up being 1% of the fight, 2% of the
fight, right? I mean, like, we can try our very best. It's
going to take years and years to replace these legacy capabilities with new things. So I think, what will
the battlefield look like, you're going to have a weird
anachronistic mash of things that were built in the Reagan era, like our
tracked vehicles built in the Reagan era, operated by humanoid
robotics that just rolled off the line a few weeks ago, but
only like one column of them, and all the rest are going to be crewed by people. You're gonna have things like AI
fighter jets flying alongside aircraft that were built under Bill Clinton, and they're going to be flying
together in formation. And unfortunately, there's probably going to be a lot more manned aircraft. And the AI aircraft are going to be a valuable component, you know, they'll be the tip of the spear making first contact.
And they're probably all going to be blown up. And we're going
to say, shit, I wish we would have been building those for another couple of years. It's just 2030. I
mean, it's close. It's just so little time to build it, deploy it, and then train people
on it. Remember, you can't just deliver these things day one. People have to train for years
to become proficient in something. Imagine if you showed up with a new alien weapon system,
pulled straight out of the Roswell wreck today.
And you handed it to a soldier and said,
you have to go to war with this tomorrow.
That won't work.
You need to develop tactics.
You need to develop doctrine.
You need to have him train with his squad
for years, potentially.
Let's take it slightly different.
Let's talk about the 0.01%.
Let's talk about the elite Navy SEAL team or equivalent out
there that will have the most advanced technology and what do they look like?
You're going to see lower fatality rates. You're going to see people who are acting as omniscient technomancers, kind of acting as a central hub. With AI surrounding them, they know everything going on. They know where the good guys are, they know where the bad guys are.
I think to a certain extent,
I think the future of warfare is gonna look a lot more
like chess than dodgeball.
If you understand what's happening
and you know exactly what you're up against,
where it is, when it is,
you can kind of know when you can win
and also know when you need to retreat.
You don't necessarily get to the point where you,
you know, win or lose the battle of Midway.
You know well ahead of winning or losing
what the likely outcome is.
And that drives probably better decisions.
I think you're gonna see a lot less casualties,
a lot less fatalities.
You're not going to allow yourself to, you know,
wheel your way into a scenario
where everyone gets wiped out.
And there's good and bad there.
I mean, when you give people better visibility
into what's gonna happen, imagine this.
Imagine a world where we get into a fight
that we can't really afford to lose.
And then we find out that to stay in that fight, we're going to have to send 50,000 sailors to the
bottom of the sea. I don't think the United States has the political will to do that.
We just don't, especially knowing that it will happen. And so it's a double-edged sword.
But I think in general,
I'm on the side of having the information to make that decision. And that I mean, it's going to make
decisions a lot harder for these guys, because right now, there's a lot of, I guess I'll end
with this. In current warfare, fog of war allows for enough indeterminism that someone can make
hard decisions without really knowing
what the impact would be. You believe, hey, this might work. Everyone might be fine. It
is interesting to ponder what happens when that uncertainty is removed. What happens
when you order someone to do something, you're no longer sending them into a non-deterministic
liminal space. It's like, oh, well, they might live, they might not.
What happens when you know that they are,
with a high degree of certainty, going to die?
That will be a change in the nature of warfare
at a very high level.
Now, of course, the flip side is, like I said,
I think there'll be lower casualty rates,
better decisions will be made,
but it's going to make for a very hard
set of ethical quandaries.
But the flip side is, I don't think anyone would argue that it's better to not know. I don't think you'd find anybody saying it's better to not have that information in your decision making process.
So this Navy SEAL is omniscient, they've got enhanced imagery,
enhanced knowledge. Probably a hundred to one ratio of autonomous systems to men.
You know, every person who's going to be out there
is going to be working in a highly networked fashion
with a hundred autonoms.
So they're commanding drones and robots
and basically they're a critical extension.
Some will be commanding
and I think a lot of them are going to be
just autonomously doing their jobs.
You know, suppose that you have that Navy SEAL,
he might be aided continuously by 10 drones that are sensing the world around him,
looking for things that are a threat.
He's not so much commanding them as consuming the information that comes in.
And he's not watching 10 drone feeds.
He's just seeing in his augmented view of the world where those threats are.
And as things become a critical threat, the system is able to highlight that.
He doesn't have to look at 10 drone feeds and say,
huh, that guy's running.
I think he might be going over there.
The system is going to say, hey, this is the top threat.
It's the only thing that might kill you in the next minute.
You need to deal with this.
What do you want me to do?
So it's going to be a little different in that he won't be commanding the drones so much as them feeding him a view of the world.
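To make that concrete, here is a minimal sketch, in Python, of the kind of threat-prioritization logic being described: fuse detections from many drones and surface only the most urgent item, instead of showing the operator ten raw feeds. This is not Anduril's actual software; every class name, field, and threshold below is illustrative.

```python
# Minimal sketch of drone-feed threat prioritization (illustrative only).
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                # e.g. "person_running", "drone", "vehicle"
    distance_m: float         # estimated range from the operator
    closing_speed_mps: float  # positive if approaching the operator
    hostile_score: float      # 0..1 confidence that this is a threat

def time_to_contact(d: Detection) -> float:
    """Rough seconds until the object reaches the operator; inf if not closing."""
    return d.distance_m / d.closing_speed_mps if d.closing_speed_mps > 0 else float("inf")

def top_threat(detections: list[Detection], horizon_s: float = 60.0) -> Detection | None:
    """Return the single most urgent hostile detection inside the time horizon."""
    urgent = [d for d in detections
              if d.hostile_score > 0.5 and time_to_contact(d) < horizon_s]
    return min(urgent, key=time_to_contact, default=None)

# Ten drones would each contribute detections like these; only one item is surfaced.
feeds = [
    Detection("person_running", 400, 0.0, 0.3),
    Detection("drone", 250, 12.0, 0.9),
    Detection("vehicle", 900, 20.0, 0.6),
]
threat = top_threat(feeds)
if threat:
    print(f"Top threat: {threat.label}, ~{time_to_contact(threat):.0f}s out. Engage?")
```

In this toy example the approaching drone wins because it has the shortest time to contact, which is the point of the "only thing that might kill you in the next minute" framing.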
And I act like this is the future, but of course, this is what we're doing with our customers right now. I mean, people are doing these things in exercises and in small-level conflicts all over the world right now. It's just going to be a different thing when it's...
Have you taken the time to dream five years out and beyond?
So you're building with the technology.
Five years out, I know exactly what I'm doing.
Five years is easy.
The things that are going to be relevant five years out,
we're starting to build them today.
You know, we just started construction on a $900 million factory in Columbus, Ohio
to build our autonomous fighter jets.
Those are going to be in combat before 2030.
So 2030, easy, easy for me.
I know exactly what we're gonna be selling.
So the question is, what are you starting to design
and build in 2030?
Yeah, that's the interesting one.
I mean- It's actually hardly anything.
In general, Anduril is very focused on building weapons for that kind of immediate near term.
It's leaked out through the press that we have certain teams working under a mandate called
China 27, which is if the feature you're building or the capability you're working on is not going to be ready
for a fight with China before the end of 2027,
you can't be working on it.
You need to find something that is relevant to that.
I don't wanna say that I'm not even thinking
about 2030 and beyond.
It's just, I'd probably say I dedicate 1% of my time.
Like I'll tell you what one thing I think,
I think you're gonna see subterranean warfare
become a much bigger part of the future.
Really?
Oh, I believe it's the next major war-fighting domain.
I've said this many times and everyone thinks that I'm nuts.
What is that, drilling machines?
What does that look like?
So, yeah, more or less.
I mean, have you seen the movie The Core? Oh my God.
It's about the guy.
How long ago was it?
2006, I think.
Yeah.
It's about a group of guys who have to drill to the center of the Earth to use nuclear
bombs to restart the Earth's core spinning to protect us from cosmic rays.
Not a scientifically sound movie, but something like that.
You know, the United States and the Cold War, sorry, the United States and the Soviet Union
during the Cold War, both had subterrene programs, building
vehicles that moved through the crust of the earth, just like a
submarine would move through the ocean. And the Soviets
actually built a prototype and then lost it in the crust of the
earth. So it worked well enough that it just went off and they lost track of it melting through the crust.
I think that that's gonna become a very powerful part
of the future of warfare.
And I'm not talking about just tunnels or bunkers.
I mean, using the crust of the earth
as a fully three-dimensional battle space
that you will be moving supplies through.
You'll be doing electronic warfare,
kinetic warfare, psychological warfare, high-end logistics.
And that I don't think is relevant to China.
The technology is just not quite there.
I can't really build things at scale that are relevant by then.
That's one of maybe the few things I think past the 2030 timeline.
I think it's going to become a huge deal.
And at some point, the same way you see a Space Force,
I think it's very likely you'll see some kind of subterranean corps.
I don't know exactly what that's going to look like.
But right now, the people who work on sub-T,
it's like a group in the Army whose job
is to deal with bunkers and tunnels.
I think it'll become a large enough part of warfare
that you're going to need a dedicated group that
focuses on the unique challenges of the subterranean domain.
Amazing. I saw a video you posted of Pulsar L recently. That was a bit of magic. It looked like a bunch of mosquitoes dropping out of the sky. Would you describe that for folks?
Well, yeah, it's so funny, I don't know if you saw that. We just launched this video that shows Pulsar L.
It's basically a thing the size of a small cooler
and you can carry it in the back of a truck.
And it is an AI powered electronic warfare system
capable of jamming, spoofing, hacking,
targeted cyber effects, general cyber effects,
doing things that make the motors of drones
want to stop working, makes their navigation
not do what it's supposed to do.
It's things that don't just work against remotely piloted drones.
It even works against autonomous attack drones.
It looks like an EMP being triggered and everything just falls down.
We released this video where, and by the way, that was a real test event.
So we had 25 autonomous attack drones and then they're flying towards the target,
and you turn on Pulsar L,
and they all fall out of the sky, fall to the ground.
This is a real capability.
We've been selling it to real military customers.
They're using it in combat right now,
and we finally were able to start showing it publicly.
It's so funny, because we released this video, and I don't know if you saw my tweet about this, but people were all saying, there's no way this is real. This is all totally fake. It's all CG. Anduril wishes that it was this easy. What they don't understand and don't see is that we've been investing in electronic warfare at Anduril for the last five years, that this is a culmination of all of that work, that this is a real capability.
And in fact, the video is literally an actual live test event. So I actually tweeted about it. I said, okay, fine.
We'll release all of the behind the scenes footage.
Like we'll just take all the video footage that we showed actually of it working.
Now, we're not going to be able to talk about, you know, the specifics of exactly the way that we're being clever with the electrons, because that stuff falls into the classified domain.
But I will note too, things like Pulsar L, they're not the solution, because it's possible to make a drone that can survive that type of attack. They're a very useful part of a layered approach, right? You need to have directed energy, you need to have EW, you need to have kinetics, you need all these things working together. And it's very hard to make a drone that makes it past all of those things.
Very hard to make a drone that can survive all of the ways that Anduril has for stopping a drone.
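The intuition behind layering can be put in simple numbers. Assuming, purely for illustration, that each defensive layer independently stops an incoming drone with probability $p_i$, the chance the drone survives all $n$ layers is

$$P(\text{survive}) = \prod_{i=1}^{n} (1 - p_i).$$

With made-up values of $p = 0.7$ for electronic warfare, $0.8$ for directed energy, and $0.9$ for kinetics, survival is $0.3 \times 0.2 \times 0.1 = 0.006$, well under 1%. Real layers are not independent and these numbers are invented, but it shows why stacking imperfect defenses is so hard to beat.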
Yeah.
It looked like the kind of device that I'd want in every Jeep on the battlefield.
I mean, every Jeep, and I would love to see it at every airport.
I'd love to see it at every sports stadium.
The biggest obstacle is actually regulatory.
Pulsar L is completely illegal in the United States for non-military use.
There's nobody in the United States who's allowed to use something like Pulsar L.
The only guy who's allowed to push that button is someone with very special
authorizations via the military. And I think that's going to change. I've been spending years now talking
with members of Congress who understand
we can't afford to have our airports shut down by drones.
We can't afford to have our military bases surveilled
by drones.
I suspect inevitably there will be someone
who commits a large scale terror attack or series of terror
attacks using drones.
And it's cheap enough to get ahead of these threats
that we should at least try.
Everybody, I wanna take a short break from our episode
to talk about a company that's very important to me
and could actually save your life
or the life of someone that you love.
Company is called Fountain Life.
And it's a company I started years ago with Tony Robbins
and a group of very talented physicians.
Most of us don't actually know what's going on inside our body.
We're all optimists.
Until that day when you have a pain in your side, you go to the physician in the emergency
room and they say, listen, I'm sorry to tell you this, but you have this stage three or
four going on.
It didn't start that morning.
It probably was a problem that's been going on
for some time, but because we never look,
we don't find out.
So what we built at Fountain Life
was the world's most advanced diagnostic centers.
We have four across the US today,
and we're building 20 around the world.
These centers give you a full-body MRI, a brain and brain vasculature scan, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a Grail blood cancer test, a full executive blood workup.
It's the most advanced workup you'll ever receive.
150 gigabytes of data that then go to our AIs and our physicians to find any disease at the very beginning
when it's solvable.
You're going to find out eventually.
You might as well find out when you can take action.
Fountain Life also has an entire side of therapeutics.
We look around the world for the most advanced therapeutics that can add 10, 20 healthy years
to your life and we provide them to you at our centers.
So if this is of interest to you, please go and check it out.
Go to fountainlife.com backslash Peter.
When Tony and I wrote our New York Times bestseller,
Life Force, we had 30,000 people reach out to us
for Fountain Life memberships.
If you go to fountainlife.com backslash Peter,
we'll put you to the top of the list.
Really, it's something that is, for me,
one of the most important things I offer my entire family,
the CEOs of my companies, my friends,
it's a chance to really add decades onto our healthy lifespans.
Go to fountainlife.com backslash Peter,
it's one of the most important things I can offer to you
as one of my listeners.
All right, let's go back to our episode.
You mentioned recently that we need to look at the ethics
of using AI in warfare on a case-by-case basis.
Absolutely.
The examples you gave were compelling, and I agree with you, and I'd like to scratch at that a little bit. Where does that case-by-case ethical review happen? Do you have that kind of conversation inside of Anduril? Is this something that's happening with your DoD customers?
Sure.
How do you think about this?
The good news is that the DOD actually already has
these processes in place and they have for decades.
The reason that so many people are freaking out
about autonomous weapons is because they think
that it's a new thing.
I mentioned in my Ted talk,
people think that they're keeping Pandora's box
from being opened.
What they don't realize is that every US military base and aircraft carrier is protected by autonomous
weapons that shoot down incoming boats, incoming missiles, incoming aircraft. They don't realize
that destroyers are all capable of operating in a fully autonomous mode, even if the bridge is
completely destroyed and not a single person
is living on the top side of the ship.
And you go back to World War I and World War II.
That's right.
All of the landmines that were used, those were autonomous weapons triggered on their own.
We go back even further.
I've given a couple of talks where I argue about this idea of building weapons that execute
the intent of the designer, even when the person is not immediately physically present,
that goes back thousands of years.
Spike traps, pit traps, poison wires,
all of these are autonomous weapons.
Now, AI allows you to do new things,
but I mean, also, like in Vietnam, we were using missiles that would be fired from a jet, fly into an area, look for, for example, surface-to-air missile launchers, and then destroy them. Those are fully autonomous weapons. They're
deciding which targets to hit, which to destroy, and they're discriminating between one type of
target and the other. And so, yeah, what I mentioned is that you have to look at these on a case-by-case basis and not have a blanket prohibition on AI or autonomy. Imagine if you could say, hey, I can take this landmine, and it's an anti-vehicle landmine.
It's not set off by people. It's set off by vehicles. Right now, it can't tell the difference
between a school bus and a tank.
Yeah.
Why would you want that?
Yeah.
There are people who are fighting for that.
They want a UN level resolution
to condemn the use of AI in weapons
to make it illegal for a robot to pull the trigger.
And my point to them is,
if you're going to use land mines,
shouldn't they be able to tell that difference? Shouldn't you be able to use every tool to achieve the most precise, most surgical outcome, with the least civilian casualty attached?
And they'll say, oh, here's why I don't believe that.
And my point to them is, if you have a problem
with landmines, ban landmines.
Don't ban landmines
from being as good as they can be at not killing civilians. And it's the same thing with a
bomb. If I can make a bomb that using autonomy does not kill the person who's a hundred yards
over to the side of the guy that I need to get rid of. If I'm taking out the head of
Al-Qaeda, isn't it better to have something that kills
that guy and doesn't blow up the building next door? There are people who would argue,
no, it's such an ethically fraught problem. They can't deal with how icky it feels to have a robot
decide who lives or dies. And my point to them is, guys, the DoD has a process for this that they've
been applying for decades. The key is to never abdicate human responsibility.
A person always needs to be responsible
for how force is used.
When that AI weapon kills the wrong person,
there needs to be human accountability
as if there was a person pulling the trigger.
That is the thing we cannot afford to compromise on.
Banning AI wholesale is just going to ensure
that one, we lose, and two, that we're fighting
with our hands behind our backs and a lot of civilians are going to die as a result. That is
not a moral outcome in my opinion. I have to admit, I mean, you've been on my stage at the
Abundance Summit twice now and the first time you were on stage, I was a bit nervous about how
the audience was going to react. Sure.
Right?
And it was like just standing ovation.
People were completely won over by that argument of,
if we're going to get into a war, if we're
going to aim to kill somebody, let's
make sure that the collateral damage is completely minimized.
That's right.
And let's focus on the intention.
What arguments have you gotten against that? Because I can't imagine one that would win.
It's a difference in philosophy. Like, I think, okay, perhaps I can steel-man this. There are people who will usually argue one of two things.
Either they'll make a purely philosophical argument, like, it is not the place of tools to rebel against man. You know, there are certain things we cannot outsource, no matter the cost in lives.
They would rather civilians die today
than outsource these decisions to AI models.
And it's just a difference in philosophy.
I think that minimizing civilian deaths
is really important.
There's other people who I think take a more
existential risk approach.
Like you're familiar with the X-risk people
in the AI community.
They say, I have no problem with the landmine, okay?
The landmine that doesn't blow up the school bus full of kids, I have no beef with.
But first it's the landmine, and then it's the gun, and then it's the nuke, and then it's Skynet,
and it wipes out everybody. But my point to them is, look, that just isn't how the DoD looks at these things. These are usually people who are not familiar with how the DoD actually makes decisions.
It's hard enough for me to get AI into that landmine.
That's actually hard.
There is such an extraordinarily stringent review before they deploy new weapons.
Here's a great example.
There was a new landmine that was capable of a fully autonomous mode that was developed
during the 90s.
It was developed by the United States Army.
It was basically a sensor that could trigger remote mines around it, and it would detect what kind of vehicle it was and blow up if it detected it.
They actually disabled that capability in the final version of it because they couldn't
figure out how to attach responsibility for malfunctions.
They couldn't figure out how they were going to say who's responsible. Is it the guy who ordered the mine deployed? Is it whoever updated the instructions or the categorization? Is it the contractor who developed the differentiation model who's liable for civilian casualties? The military is actually fundamentally very conservative.
They don't take these crazy risks.
And so people imagine there's a slippery slope to Skynet.
Remember that our nuclear arsenal until a few years ago
ran off of floppy disks.
They were so conservative, they didn't even want to move
to digital circuits controlling these things.
And they kept it all analog.
Given that, I'm just not that worried
about the slippery slope.
I think the people who are in charge of these problems
are very sharp.
And if you don't believe in the process
that puts these people in positions of power,
then you just don't really believe
in the democratic process period.
I mean, look, the alternative is you have people who are making all of these decisions.
Flawed as they are.
Well, and my point is, look, if you trust a 19-year-old kid to not nuke the wrong people... I'm just kidding. Sorry, that's a little ridiculous. 19-year-olds don't get the nuclear keys. You know, it's people who are a little more senior.
But you get my point.
If we are trusting people, young men with decisions of great life and death import,
it seems a bit strange to me to say, oh, well, I think that the system as a whole though
is just going to trend towards irresponsible use of force and the machines are going to kill us all.
I understand the X-risk people, but similar to, I don't know how you feel.
I don't know how you feel.
It's the same thing where people say this about AI,
that it's nothing to do with weapons.
They say, we shouldn't develop AI to help us with physics
because what if it develops new physics
and then it uses those to exterminate humanity?
And I just, I'm a lot more worried about evil people
with existing AI.
It's not artificial intelligence,
it's human stupidity I'm worried about.
Yes, the AI part is the part I'm least worried about.
I'm worried about bad people using good AI,
not super AI turning against everybody.
All right, a lot of people don't realize that the tech you've been developing has some significant non warfare applications and here I'm pointing
directly at the prevention of perilous wildfires. Yep. So two years ago, two and a half years ago,
very proud we were in DC together.
You were the first registrant
for our $11 million wildfire prize.
You really pulled together a lot of people.
You had the Lieutenant Governor of California there,
the head of CAL FIRE,
a lot of people from the US Forest Service.
It was a great event.
And it was valuable to have you step up as our first
registrant, and even more valuable for what you said. And God knows, six months ago we all had, you know, front-row seats to the Palisades fire. That shouldn't ever happen again.
It shouldn't have happened then.
It shouldn't.
I mean, the really crazy thing is, and you know this, but not all your listeners might.
Anduril started working on firefighting technology right at the start of the company.
We built the Sentry Fire Fighting Tank. It was a tracked autonomous firefighting vehicle that
could continue to fight fires even after
a fire had overwhelmed an area.
So continue fighting long after all the people have shipped out.
And the problem we ran into was actually purely political.
It's that people were afraid it was going to replace jobs, automate jobs, and they were
saying, if you fund this, then we're going to come out against you politically in the
upcoming elections.
And that was really a big problem.
I think we should have been working on it. And I think similar things apply even here. The whole point, for people to know, of the wildfire XPRIZE is to end destructive wildfires using autonomous technology, to build things that can detect and react to fires instantly. And the problem is, not everybody wants things to work that way.
There are people who don't necessarily
want to stop wildfires because their job is tied
to fires continuing to exist at scale.
And that's the hardest part of this.
Well, look, I mean, if your whole job is to do,
let's say large scale firefighting tanker operations,
you're not gonna be excited about giving money up
in your budget to build something
that stops that from ever happening.
It's a perverse set of incentives.
Yeah, I don't think any of these people are waking up
in the morning saying, ha ha ha,
I can't wait for there to be more fires and more deaths so that I can get my budget.
But they're not going to ever want to take risk if even a successful outcome is one that
is probably bad for them.
But I mean, I said it at the event, I'll say it again, we can do this.
This is not a distant future.
This is not a super intelligence problem.
This is a matter of product execution.
The tech to detect and extinguish destructive wildfires, maybe not all of them, but let's say 95% of them, exists today.
We just have to put the pieces together and demo it.
And like, I think the evaluations for the prize
are coming up in October, so not that far away.
But a whole bunch of companies are in it, like, the most interesting...
And you've seen the companies.
I mean, you're a registrant right now.
Oh, yeah.
And we're teamed up with some of them.
The coolest part about this prize is, unlike maybe some other X prizes,
where there were a bunch of people trying to figure out if it was even possible and pushing the limits.
In this case, I think there's actually lots of companies that are proving that it is very much possible.
It's now just a matter of cost and effectiveness.
How much will it cost to do this? And so everyone's trying to drive down the cost, increase the effectiveness.
I think all the teams are in a place where they've proven it can work. Have you publicly come out to say what tech you're going to use?
I don't think we've publicly gotten into too much. The plan right now is we're going to use,
we're going to demonstrate multiple things. So, and this is how I think it'll be in the real world.
In the real world, you're not going to have one type of solution. The aircraft that will respond to a fire that's on a hill, you know, a hundred miles out in the brush, is very different than one for a fire that is, let's say, going to start in a power substation right next to a bunch of trees.
It's just, you'll need different tools for each job.
And so what you need to do is detect fires, classify what kind of fire it is, and therefore
what kind of firefighting agent you need.
Like, you know, even with-
And what the environmental conditions are in terms of wind.
Exactly.
How you can approach it.
Exactly.
How far is it?
What's the closest asset?
What type of compounds do you need to fight that fire?
And you actually need a system that then autonomously decides
which of these available assets is the best
to stop this particular problem.
Then deploy it and then see through.
Did it work?
Did I slow the fire?
Did I stop it? Do I need to continue to deploy assets? Do I need to actually have the big guns come out while I try to just tamp this down?
And so I think our plan is we're going to have multiple Anduril assets, different types
of assets with different type of capabilities, and then demonstrate how different types of
fires trigger a different response from the system. I also think that's how it's going
to work in the real world.
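As a rough illustration of that decision loop, here is a minimal sketch in Python. It is not Anduril's system or the prize architecture; the classes, fire types, and the containment check are all stand-ins for real detection, classification, and post-drop assessment.

```python
# Minimal sketch of a detect -> classify -> dispatch -> assess loop (illustrative only).
from dataclasses import dataclass

@dataclass
class Fire:
    fire_type: str            # e.g. "brush", "substation", "structure"
    wind_mps: float
    contained: bool = False

@dataclass
class Asset:
    name: str
    suited_for: set           # fire types this asset can handle
    available: bool = True

def select_asset(fire: Fire, assets: list[Asset]) -> Asset | None:
    """Pick an available asset suited to this fire type (stand-in for a real
    cost / time-to-target optimization)."""
    candidates = [a for a in assets if a.available and fire.fire_type in a.suited_for]
    return candidates[0] if candidates else None

def assess_drop(fire: Fire) -> bool:
    # Stand-in for real post-drop assessment (imagery, heat signature, spread rate).
    return fire.wind_mps < 15

def respond(fire: Fire, assets: list[Asset], max_rounds: int = 3) -> bool:
    """Deploy assets in rounds until the fire is contained, else escalate."""
    for _ in range(max_rounds):
        asset = select_asset(fire, assets)
        if asset is None:
            break
        asset.available = False          # deploy it
        fire.contained = assess_drop(fire)
        if fire.contained:
            return True
    return False  # hand off to the conventional "big guns"

# Example: a substation fire triggers a different asset than a remote brush fire.
assets = [Asset("long-range tanker drone", {"brush"}),
          Asset("short-range suppression UGV", {"substation", "structure"})]
print(respond(Fire("substation", wind_mps=8.0), assets))  # True under these stand-in rules
```

The point of the sketch is the shape of the loop, not the placeholder logic: classify, pick the best available asset, deploy, verify, and only escalate if the autonomous response fails.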
If Anduril were to deploy, let's say, a Lattice instance with sensors and fire watch towers and space-based layers, if people say, why not just do it all from space, the answer is sometimes you have weather that makes it impossible to see what's going on. You might have a lot of fog, you might have a lot of clouds.
And so you probably need some terrestrial layer as well.
But if you were to build all that,
I think Anduril's probably not gonna be building
all the vehicles that respond.
I think you're actually gonna see
a lot of different companies focusing on their niche.
You'll have some people building a vehicle
for more urban type environments,
you'll see others building it more for long range.
And what I love about the XPRIZE is it's a Darwinian evolution
where you have hundreds of different approaches all
competing.
And at the end of the day, like you said,
you probably will collaborate with a number of them.
I think that in the end, probably a half dozen
of the teams that are competing, if CAL FIRE were to award
a multi-billion dollar contract
to deploy these systems at scale and stop wildfires,
I am very sure it would not be any one of us
getting all that money.
It's going to end up being a distribution of money
to people for a lot of different things.
And the thing that is insane is when you have these Malibu
fires, these Palisade fires over the years,
and then it's impossible to get insurance for your home.
Well, and the cost, because the cost is so immense
and the requirements California's put on these insurance companies
is also so immense, what blows my mind is when people,
we've talked to people in, like, you know, Cal OES,
Office of Emergency Services,
and I'm not putting them down,
they've got a lot of constraints they have to work under.
But what's so fascinating is when we talk about the cost
to deploy a system that would detect every fire
and put out many of the fires,
they say, oh my God, well, that's billions of dollars,
where are we gonna get billions?
And my point is, if you stop even one fire,
you've already made the money back.
You've already done it.
It's just one.
Lives lost, property lost, time.
It's crazy.
And the lives are irreplaceable.
But even if you look at it just in dollars and cents,
stopping one of these fires would pay for the whole system
in terms of the economic damage.
And so it's one of those things where it seems like a lot of people are being penny wise
and pound foolish.
And we have to fix that.
I'm going to ask you a rapid fire set of AMA questions from my Twitter audience.
Let's do it.
All right.
I'll be efficient in my answers so that we can hit as many of them as possible. Can a drone fly in formation, land, transform into a robot with two or four legs or wheels, and recon and attack on land? Do you imagine sort of a mixed-mode set of drones?
Such a thing is definitely possible and I've actually seen companies that are building
exactly this. I've seen companies building quad copters with legs, for example, where they land and most of them,
they land on their legs so they can loiter for long periods
like as a watch capability.
I've seen people building robot dogs
that basically have jet packs.
I love that.
I've seen the gamut.
The thing is, I'm not gonna say any of these
don't make sense.
It's really a matter of how many situations.
Need both, need both.
Exactly.
Like why not just keep flying?
Why not get there just walking?
Or here's another example.
Why not put the robot dog onto a flying vehicle that drops it in place, and then you don't have to carry all of that extra parasitic weight? It is always possible to come up with some niche scenario where, you know, oh, I need it to fly, land, walk into a cave, jump over a hole, fly out the other side.
Those do exist. But here's the good news. Making these different niche robotics,
I think it's going to be a big part of the future. There's not going to be one form factor that
dominates everything, right? You're not going to see C-3PO style humanoid robots doing literally everything.
There's going to be hundreds of different form factors and I bet some of them will have wings
and legs just like in nature. All right, next one, a serious one. Between US military tech
and Chinese military tech, is America behind? There's places where America is behind. There's places where we're ahead.
It's hard to give a universal answer.
In general, I think the United States has a strong lead
in a lot of the areas that people would consider critical.
But at the same time, you have to look at the fact
that China has about 300 times more shipbuilding capacity
than we do.
Insane. People can't visualize that.
I saw your tweet.
And by the way, that's not during wartime. They say, oh, well,
America would just scale up during wartime. Well, so will China. And they've proven that
they can do it. They've also, like, I'm not saying we should do this. I'm not saying we
should copy every move of an authoritarian, centrally planned state, but China has made it a law that many types of boats, for example, passenger ferries, car ferries,
they can be commandeered and they have to build every passenger ferry to military specifications
so that it can be used for a Taiwan invasion. All of their car ferries have to meet a certain deck plate load standard so that they can move armored vehicles onto them and move them to Taiwan. That's an advantage that they have. And so, are they ahead of us on, like, amphibious landing capability? Immeasurably so, by orders of
magnitude.
You had a tweet that I found particularly interesting about
the importance of a Navy in projecting global domination.
Yep, and also protecting freedom of movement, freedom of trade. It's just,
our Navy is kind of the backbone that allows global commerce.
Yeah. All right, here's one you may or may not want to answer. What are your honest thoughts
about Mark Zuckerberg? What are my honest thoughts about Mark Zuckerberg? Well, I
mean, the subtext there that people may not be picking up is
that Facebook acquired my company in 2014. I worked there
for a few years on VR. My company was Oculus VR. And then
I was fired after giving money to the wrong political group. The libertarians, don't you know.
But I think people need to realize that what,
look, I'll put it this way.
I took every single liquid dollar that I had
and bought into Meta stock the day that they announced
they were changing their name to Meta.
Mark is the number one VR fan in the world.
It's a title I wish I could have.
I wish I could be the world's number one VR boy,
but I'm not.
Mark spent $60 billion on AR and VR.
He beats me handily and he's done so through immense pressure
from people who don't understand his vision
or where he's going.
And so, look, whatever beef I might have with Mark
over other items in general, I think,
he understands the future.
He's resisting extreme and severe pressure
from people who don't understand his vision.
I think he's been very practical, he's been very pragmatic in his engagements with the government,
even to the detriment of his press coverage and the attitude that people show towards him.
And the thing that I like, that I've had to come to terms with is, it wasn't Mark who fired me.
It was the apparatus that was under Mark.
And one of the things I've had to come to terms with
is that the people who ousted me,
the people who orchestrated my destruction,
who seized my baby from me,
they're not even at Meta anymore.
It's been eight years.
The people who conspired to stab me in the back, they're gone.
And so, you know, can I really be upset
at the corporate structure that remains behind people?
Like, you know, am I mad at their ghost?
Am I mad at the ghost of the people
who once walked the halls of Meta?
And so I'd say my opinions have varied over the years
and this is probably more than I even
should be saying about it.
But in general, I have a lot of respect for Mark
and there's been times where I've been a lot more
upset with him than the present.
And a lot of that came down to the fact that, through a series of unrelated litigation,
it became very clear in the discovery process that it was not Mark who had stabbed me in the back.
It was people who were much closer to me.
Well, some could say that
Anduril exists now because of that action and the world is a better place,
or at least the United States is a better place because of that.
That's an argument that's been made. The point that I make to those people is,
if a guy got shot in the head by a burglar, and then he gained superpowers, he became supernaturally intelligent, the guy still shot him in the head, right? I'm not gonna say, oh, well. So I understand that, but the point that I would make is, look, the thing is, yeah,
Mark was in charge of the company at the time, but imagine this, you're the executive of
a major company worth hundreds of billions of dollars.
The people who you trust come to you and say that the people that they trust have come
and said, we have to fire Palmer.
There's no other way around it.
This is the only way to handle the situation.
What are the odds that you're gonna go and say,
you know what, I think that the people that I trust
are being lied to by the people they trust.
The entire thing is a farce and that they're doing it
for purely political reasons.
I reject you and I override this decision two levels down.
That's not how the real world works.
At the same time, you got a thousand other problems
going on that you're dealing with.
A thousand other problems. And I hate to say it, but that's probably the decision I would make in
my company. If I had people coming up multiple levels through and they said, this is the only
way that this is going to work. Here's what's going on. My first instinct is not going to be,
I think that you are all lying to me. Now, I'm not saying the people who talked to Mark were lying. I'm saying, you go far enough down the chain, it's hard to say, I think everyone is actually engaging in an orchestrated coup based on false information to run Palmer out of his company so that we can seize power and get more money out of the performance bonus fund that he will not get access to if we blow him out.
That'd be a crazy thing for you to perceive from the top.
And so as someone who's now running an organization with 4,000 people in it,
almost 5,000 people, I'm very sympathetic to the realities of large companies.
Bitcoin, how much do you love it?
Do you own it?
What are your thoughts on it?
I'm a big time Bitcoin guy.
I have been from the beginning.
I have been mining my own Bitcoin
since before there were-
Of course you are.
People have often asked,
when did you buy in?
I didn't buy in, I mined in.
Nice.
And I've been doing that
since before there were any exchanges.
I was on the BitcoinTalk.org forums.
I sold a banner ad on one of my websites
for 700 Bitcoin.
And my website was like a little crappy internet forum, and I still did that.
I remember very vividly going to an online Bitcoin slot
machine and betting 60 Bitcoin on one pull.
Nice.
Didn't work.
And, you know, I was part of the Mt. Gox hack.
I lost all of my coins that were in Mt. Gox
and then 10 years later, I got like 13% of them back,
you know, through the recovery process.
So, I mean, I've been in Bitcoin
since the very, very beginning.
Of course you are.
I'm a huge fan of Bitcoin
relative to other cryptocurrencies.
I've often said there's two kinds of crypto.
There's Bitcoin and shitcoin.
And it's a long discussion as to why I believe that.
But I'm a big fan.
And I actually originally got,
became interested in cryptocurrency
because of an essay by Jim Bell on his website,
the Outpost of Freedom called Assassination Politics.
And it was about how he believed cryptocurrency
would reshape world politics, the insurance industry,
the military governments
across the world.
Jim Bell was arrested and sent to prison for being a terrorist later and also didn't pay
his taxes.
So a very interesting guy.
I'm not saying he's my hero, but I am saying he did predict Bitcoin and many of the impacts
back in 1996. That's when he wrote Assassination Politics.
I highly recommend it to anyone who wants to read what someone who is very ahead of
his time, though on the fringes of society, was thinking about crypto before anyone else
was.
Fascinating.
All right, here's a fun one.
Would you ever consider or would you ever buy
a defense prime, Northrop Lockheed?
I won't rule anything out,
but I suspect it won't make sense.
We are in the same industry,
but we're very different businesses
and our investors are very different.
I talked earlier about how you have to attract
a certain type of investor, repel another.
Their type of investors,
compared with our type of investors
and in terms of what they want us to be, I think it's a bit like oil and water.
And we'll team up with those companies.
We do frequently.
We're selling rocket motors to some of those companies.
They're supplying payloads into some of our systems.
So we'll work together.
But I think to actually bind our fates in that sort of way, it'd have to be exactly the right mix.
And I think that only happens if the world changes a lot.
All right, I'm gonna put you on the spot here.
It's a conversation we've had over dinner
on a couple of occasions.
We just awarded a hundred million dollar prize
that Elon funded for carbon extraction, which was amazing. The winning team had some brilliant approach. I
didn't see that.
Yeah, I was at the TIME100 last week.
Is the carbon that's recovered just stored or turned into some
kind of, can you turn it into a synthetic long chain
hydrocarbon?
Yeah, so we had 1,300 teams enter that competition.
Yep.
From 88 countries.
We awarded six of them part of the 100 million.
One team called Mati Carbon got 50 million.
I handed the guy a $50 million check on stage.
Now, this guy is amazing.
He's living in Houston, born and spent much of his life in India. And they're actually using a technology
for rock weathering.
Okay.
So it turns out that basalt.
Yeah, and then it absorbs.
It absorbs the carbon.
But what he's done.
It's crazy, you just bust up the rocks
and they absorb. To a fine powder.
Are they doing it with an atmospheric process
or in a water process?
No, they're basically spreading it on farmland.
Oh, fascinating. I was mostly familiar with like maritime weathering projects that use ocean
as the carrier for the carbon, but atmospheric weathering is interesting.
It increases crop yields by 20 to 30 percent.
Oh, because you're pulling in all of that carbon, which is just pure plant food. Also water retention. And so he's been building it out in a number of nations, and
he's just going to spend the money.
And I just introduced him to an incredible philanthropist that's going to just 100X what
he's doing right now.
So it's a beautiful one-two punch.
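For anyone curious about the chemistry behind spreading crushed basalt on farmland: silicate minerals in the rock react with CO2 and rainwater to form dissolved bicarbonate, which is eventually stored in soils and the ocean. A representative, simplified reaction, using forsterite (a magnesium silicate found in basalt) as a stand-in, is

$$\mathrm{Mg_2SiO_4 + 4\,CO_2 + 4\,H_2O \;\longrightarrow\; 2\,Mg^{2+} + 4\,HCO_3^{-} + H_4SiO_4}.$$

Grinding the rock to a fine powder matters because the weathering rate scales with the exposed surface area.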
The 100th anniversary of Lindbergh's flight is coming up.
1927 to 2027.
I'm a huge fan of Charles Lindbergh.
I have a signed portrait of him
that my grandfather gave me before my grandfather passed away.
He was an airline pilot for 40 years,
and Charles Lindbergh was his hero.
And I've been to Lindbergh's grave out on-
In Hawaii?
Yep, out in Hawaii.
Well, Eric Lindbergh, who's one of my trustees
at the X Prize, I'd love to introduce you to him. He's amazing.
I had no idea.
Yeah, his grandson. And when I announced... so the original X Prize for spaceflight came out of the Spirit of St. Louis book, right? I was reading about this $25,000 prize and it sparked, you know, the aviation explosion, and Lindbergh, the most unlikely guy, pulls it off. Long story short, Eric Lindbergh and the Lindbergh Foundation want to fund or put together a massive X-Prize again.
So we're looking for what's a big bold idea.
A big bold idea.
A bold idea that we should build an X prize around.
Interesting. Yeah. So, you last asked me the same question, I think four years ago.
Yeah.
And I'm trying to remember what I said.
I think you said tuna farming was one of them.
Tuna farming was one, large-scale aquaculture of species that are on the precipice. And what else did I say?
You talked about upleveling animals.
Uplift.
Uplift, yeah.
I'm still a human.
I mean, can you bring a non-human species
to human level sentience?
And I'm not sure what the right bar is.
Like, it's probably not the Turing test,
because even an intelligent species was probably
going to think so differently.
Well, you and I are both fans of Ben from Colossal, right?
But I mean, what if I could get an octopus to an IQ of 100, which is the human intelligence
measurement?
And it's not far off, probably.
It's maybe not.
And there's a lot of-
I've stopped eating octopus because of that.
And there's a question.
You don't have to do this naturally.
We understand what- Octopus is actually one of the hardest
because we understand them so little.
But like for birds,
we understand what the common traits of the smartest birds,
even in a local population are.
You see more brain folding,
higher surface area on the brain.
And we also know,
and you mentioned colossal,
they know how to modify animals
to produce exactly those effects.
And so it's not like we need to come up with from scratch.
We just take the things that we know, make animals smarter.
Another example is like dolphins.
They have very highly folded brains,
very similar structurally to humans.
If you were to make a few choice modifications,
you could probably massively increase their intelligence
with just a few edits.
And people say, well, how come they didn't evolve that way
then Palmer?
And the answer is because, well, that's not how evolution works, right?
You need to reproduce to be fit,
not necessarily be smart.
Give them some time.
Yeah.
Right, well, and there were no natural environments
that would favor them necessarily dedicating
even more calories to being more intelligent.
Humans have developed in a very complex environment where using tools, working as social animals
is critical.
The ocean is a relatively sparse environment.
And so there's a question as like, one of my favorite ideas is what would happen if
you took even existing marine mammals, like if you took a whale, and you put it in a VR headset and trained it to tele-operate a humanoid robot?
Could you train a mammal to interact in a much richer environment that requires tool use and
collaboration at a manual, physical level? I'm not so sure it wouldn't work. There were some NASA projects back in the 50s and
60s where they tried to have various animals interact with people and raise them from birth
around people to see how smart they could get them and if that would be relevant for spaceflight.
And there's quite a bit of sci-fi that suggests this fun idea that maybe humans
are not the optimal earthbound species for space flight.
It's not a crazy thought.
Well, your homework assignment is,
keep thinking about this challenge.
Keep thinking.
What is a challenge that-
Come up with another challenge?
That would spark people to take risks, but is not... So some people say, well, how about New York to London in 60 minutes, hypersonic flight? Yeah, but the cost for a team to take that on as an X-Prize challenge makes it just a fundraising competition.
I mean, another one is probably, like, maybe... you guys haven't done an interspecies communication prize, have you?
We have talked about it and we've been trying to raise the capital for it, but I think that's
a great XPRIZE.
Because one of the things that you've seen...
The Palmer XPRIZE.
Well, what's interesting is you're getting to that team idea.
The money to tackle something like that 10 years ago would have been just unthinkable.
Now it's two guys in a...
Yeah, and some clever ideas, a GPU, and an AI model.
You might have seen Google just released, I think it's DolphinGemma, where they've adapted their models for dolphin translation, and they're actually working with the Wild Dolphin Project.
And I've actually given a lot of money to those guys over the years.
Well, if you want to do that,
that would be a great one.
If you want to do that XPRIZE,
we're ready to run with that one.
Now here's the question. Are you guys going to prohibit me from using the dolphins we translate?
Is there going to be a prohibition from inducting them into the United States Navy?
Absolutely not.
The Navy has a large dolphin program.
Not that large. They've spent a lot of money over the years and have a small dolphin program that consumes a lot of money. But probably the best experts
in the world in terms of dolphin psychology are actually in the United States Navy.
Well, I do think an interspecies prize for dogs, for birds, for...
Could you imagine a commercial, a potential of being able to talk with your dog?
Oh my God.
It'd be a trillion dollar company, just like that.
It'd be huge.
Real quick, I've been getting the most unusual compliments
lately on my skin.
Truth is, I use a lotion every morning and every night
religiously called One Skin.
It was developed by four PhD women
who determined a 10 amino acid sequence
that is a senolytic that kills senescent cells in your skin.
And this literally reverses the age of your skin. And I think it's one of the most incredible products. I use it all the time. If you're interested, check out the show notes. I've
asked my team to link to it below. All right, let's get back to the episode.
So another AMA question here is will aging warriors be able to keep fighting using robotic technologies?
Oh, absolutely.
I mean, one of the interesting things about special forces is that they actually tend
to be much older than the conventional forces.
And people often don't understand that.
They imagine that these must be like the youngest guys at the peak of their athletic prowess.
It turns out that what you more often need in the special forces operations is people
who have unique experience, a lot of hard fought, hard won lessons implanted in their
brain.
And the thing that takes them out is that you do still need a certain level of high
physical competence and excellence to survive on the battlefield.
I think it's almost inevitable as you have more
and more resources shift to robotic systems,
remote systems.
Exoskeletal systems.
Well, I think it's exoskeleton systems, but I mean,
I'm not even sure that it's, you know,
putting old guys into exoskeletons.
I think you might have more like the wizard approach.
Oh, a wizard doesn't fight through strength.
He doesn't imbue his limbs with force
so that he can use a sword.
He fights through other means.
He fights at a distance.
He perceives the battlefield well enough
that he can act in other ways.
I suspect that if we do our job right at Anduril,
we should make physical prowess, maybe not irrelevant.
I mean, you still gotta be able to walk around
and you still gotta be able to get in and out of your car.
But I don't see a reason you couldn't have someone
who's much older or who has a missing limb
or missing limbs, people who today
can't operate effectively.
I would not at all be surprised to see them be able
to stay in service much longer.
The question then becomes, how do we keep them in?
Because right now, it's really hard to keep people, especially who have had life changing
injuries, people who are getting much older, who maybe want to focus on raising families.
So it's two things.
If we're going to keep that experience in the military and keep those guys in maximum
utility, we need to make tools that allow them to safely keep operating into later in life.
And we also need to figure out how we can pay these guys
and give them good enough benefits that they don't depart
the armed forces for very practical pro-family reasons.
Because at the end of the day, most people,
they want to do well by their families.
And we can do a much better job of keeping and retaining
those people.
We just got to give them better benefits.
We got to pay them more.
You do that, you'll keep them.
All right.
Next question from X is: could a neural-linked trigger finger fire faster than your
nervous system?
Absolutely.
It's without question.
There's an enormous amount of latency in the link from your brain all the way through your peripheral nervous system out to your finger.
I actually, I've talked about this several times,
but years ago I actually built a peripheral nervous system
bypass to test exactly this.
And I wasn't going directly to the brain,
which is what would be fastest.
I was just basically triggering it off of a muscle
that was basically a jaw muscle. And it turns out that your jaw and tongue muscles
are much lower latency than your fingers are
all the way out at the end of your hand.
The nerve transit velocity is much higher to here
and literally the physical length of the link
is just much shorter.
And so you actually need very good control
of your tongue and your mouth to not bite your tongue.
Like try chewing sometime, notice how crazy it is
that you're basically opening your mouth,
shoving food into the hole with your tongue.
And then as you bite down, your tongue pulls out just so
and you do it hundreds of times in a meal
without even thinking about it.
That coordination is crazy.
So what I did is I made a system where I could click my mouse on my computer, as a proxy for your trigger finger, by flexing a muscle in my mouth.
And in doing so, I had greatly reduced latency
in playing first person shooters.
And it turns out that that just totally works.
You can trim a lot of your reaction time right off
by just using different muscles.
And that's not even directly to the brain.
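As a rough back-of-the-envelope illustration of the path-length point above, here is a minimal Python sketch. The conduction velocity and path lengths are assumed round numbers, not figures from the conversation, and real reaction time also includes synaptic, processing, and muscle-activation delays that this ignores.

# Minimal sketch: nerve transit time is roughly path length divided by conduction velocity.
# All numbers are illustrative assumptions, not measurements from the episode.

def transit_ms(path_length_m: float, velocity_m_per_s: float) -> float:
    """Time in milliseconds for a signal to traverse a nerve path of the given length."""
    return path_length_m / velocity_m_per_s * 1000.0

VELOCITY = 60.0      # m/s, assumed motor-nerve conduction velocity
FINGER_PATH = 1.0    # m, assumed brain-to-fingertip path length
JAW_PATH = 0.15      # m, assumed brain-to-jaw path length

finger = transit_ms(FINGER_PATH, VELOCITY)   # roughly 17 ms
jaw = transit_ms(JAW_PATH, VELOCITY)         # roughly 2.5 ms

print(f"brain-to-finger transit: ~{finger:.1f} ms")
print(f"brain-to-jaw transit:    ~{jaw:.1f} ms")
print(f"potential saving:        ~{finger - jaw:.1f} ms per click")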
It's also worth noting,
you don't just have to go to the brain.
It turns out that nervous signals are,
they're kind of a mix of chemical and electrical signals.
And those of you who've been in high school chemistry,
you might remember that most chemical reactions
happen at an accelerated rate when you up the temperature.
And so one thing you can do to reduce peripheral nervous system latency is just heat up your arm. The whole thing: if you soak your arm in hot water at a very uncomfortably high temperature, it will be very uncomfortable, and your reaction time will actually improve for clicking a mouse or pulling a trigger.
So I've actually pondered the idea for years of like a product like the Magma Sleeve or something.
And like you're playing your game,
it's down to the last round.
You're like, oh no, I really got to pump it up.
And it would just super heat your arm
to the point of getting first degree burns
if you did it for more than a minute,
but it would give you that little last bit of extra edge.
I think that'd be a really interesting product
for somebody to do.
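As a hedged sketch of the warming idea above: a common way to model how a rate process speeds up with temperature is a Q10 factor, and scaling an assumed conduction velocity that way suggests only a few milliseconds of arm-transit latency get shaved off. The Q10 value, velocity, and path length below are assumptions for illustration, not measured physiology.

# Hedged sketch: scale an assumed nerve conduction velocity by a Q10 temperature factor
# and see how much the brain-to-finger transit time would shrink. Illustrative numbers only.

def warmed_velocity(v_baseline_m_s: float, delta_temp_c: float, q10: float = 1.6) -> float:
    """Conduction velocity after warming the tissue by delta_temp_c degrees Celsius."""
    return v_baseline_m_s * q10 ** (delta_temp_c / 10.0)

BASELINE_V = 60.0   # m/s, assumed arm-nerve conduction velocity
ARM_PATH = 1.0      # m, assumed brain-to-finger path length

for delta in (0.0, 3.0, 6.0):
    v = warmed_velocity(BASELINE_V, delta)
    latency_ms = ARM_PATH / v * 1000.0
    print(f"+{delta:.0f} C: velocity ~{v:.0f} m/s, arm transit ~{latency_ms:.1f} ms")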
Another question related to video games. A lot of kids are playing a lot of video games today.
They sure are. How do you think about that for the next generations coming up?
Good thing, bad thing? Was it valuable for you? Is it distracting from education? So for the parents who've got, like me,
you know, teenagers who are loving their video games
and it's the focus and obsession.
What's your advice?
I struggle with this because I love video games.
You know, I started the Mod Retro Forums
game console modification community when I was a teenager.
We just launched our first product after 17 years,
which is a clone of the Nintendo Game Boy Color.
Look, I love games.
I spent a lot of time as a teenager playing games.
I mean, thousands of hours.
And on the one hand,
it makes me worried when I see the current generation spending all this time playing Fortnite and Minecraft and Roblox when they could be doing more productive things.
But then I remember, I mean, I did the exact same thing.
And so it's hard to know where is the value.
There's connections being created
that are valuable in other contexts.
Are they more valuable than things they could be doing
if they were doing sports or books?
I feel like I'm just becoming the old person, right?
On the one hand I said, oh man, I'm not going to let my kid play games like I did.
You know, all of these kids today, they're all iPad babies.
But then, you know, Socrates supposedly said something like:
Look at the children today. They have no respect for their elders or their society.
They riot in the streets, inflamed with wild notions,
pursuing their own desires.
What is to become of them?
And I realized, you know,
people have been saying this for like 2000 years.
They're like, oh man, this new generation,
what's gonna become of them?
And they seem to generally turn out fine.
And so I don't feel like I'm in a place
to be able to speculate beyond saying,
you know, it's probably going to be fine.
It's probably going to be fine.
So you're up on the TED stage giving your most excellent talk.
And for those who haven't seen it, they should go take a look at it.
And you're wearing these glasses.
And there's a lot of speculation about whether or not your speech is being fed to you
in the glasses. Is it?
I was cheating.
I had my notes up on my glasses.
The hardware in particular is made by a company
called Even Realities.
So I was wearing a pair of Even Realities G1 glasses
and it's a really remarkable product.
They've done a great job of making smart glasses
that really do look like normal glasses.
I mean, the arms are very, very thin.
It kind of hides the battery
and interface back in your hair, and I have huge hair,
so I hide most of that bulk.
The lenses look like normal lenses,
and it's giving you, not just in one eye, a full stereoscopic little window that's green only, so it's not full color,
but it has a really great function
that can show you your notes,
it can show you your script,
you can pull up critical information and messages.
If someone were to tell you,
Palmer, slow the fuck down,
you can have that message pop up
and then you just see it.
Oh, okay.
I'm not saying I got a message like that,
but if I had, I'd be able to.
And so for people who don't know,
TED doesn't allow any teleprompters.
They want you to memorize your whole talk.
They want you to just memorize the whole thing.
And I'm pretty good at this stuff when I prepare,
but I have to admit,
especially when you talk
about specifics, the number of bombers that were built in this specific year per minute,
or you're talking about the specifics of some technical item, it's really nice to have your
notes up so that you can refer to that and make sure you don't say something that has
everyone making fun of you for a year.
And pointing out, oh, what about Palmer, where he said that China has 200 times instead of 300? This guy barely knows what's going on.
I'm a huge fan of augmenting human capability. I think that when you expand
your capabilities beyond what you were born with, you know, you kind of extend yourself out
into your phone and into your wearable glasses.
You augment your vision, you augment your haptic perception.
You're living what you preach.
I'm living what I preach.
And it was especially funny where there were people
who were like, you know, why would Palmer wear something like that when it makes him look so dumb?
And all I could think when I was reading these criticisms are, have you seen me?
Have you seen me? Do I look like the kind of guy who wakes up in the morning and says,
okay, first things first, what will people think of my outfit today?
I'm blessed with...
I'm glad you're wearing flip-flops, shorts, and a Hawaiian shirt on stage.
I mean, you know...
Look, here's what I've realized.
You know, I've been doing it for so long and I get away with it now.
And I think the point that I usually make to people is, look,
when you achieve success, you earn a certain level of eccentricity that is allowed.
And so if I want to be eccentric, that's okay to a certain level.
And I've decided that I'm going to put all of my allowed eccentricity points into my mullet
and into my clothes so that I can focus on other things and look how I want.
And, you know, when I got this mullet,
I'd always wanted a mullet, my whole life.
How long ago did you?
How long did I want one? My whole life.
You know, when did you start?
A few years ago.
My mom would never let me get one.
And then I started dating my wife, Nicole, when I was 15.
So we've been together for a very long time.
We met at a debate camp at a law school in Maryland, Virginia as teenagers. And she didn't want me to get
a mullet. And then when we got married, I realized that there's nothing she could do.
I was like, oh my God, I can get the mullet. She can't leave. And so I got the mullet and
luckily she actually likes it now although
I'm way overdue for a haircut. It's out of control. I've transcended from mullet man to homeless man. But again, that's the eccentricity that I've earned.
I love that. Is there a favorite principle or mental model that
you live by that is sort of like a guiding set for you?
A guiding principle.
I mean, there's so many.
You know, Marcus Aurelius had a whole bunch of stuff in Meditations that speaks to me.
I wouldn't say I'm fully in his philosophical camp, but you can pick and choose a lot of things.
You've quoted a few on stage, haven't you?
I have. What else?
I mean, probably the one that I like most recently is,
you know, what Roosevelt said about,
it's not the critic who counts,
it's the man in the arena.
It's the one who actually bets it all.
It's the one who actually sacrifices and gives
and inevitably will fail
and he will get beaten up and bruised, but it's his contribution that counts, not the people who stand on the sidelines, not risking anything, picking apart everything he might have done wrong,
whether it's wrong or not. And I like to remember that especially when
things do go bad because they don't always go well. Sometimes you flip a coin
and you get tails. Along that line, one of my pet peeves has been
so many incredible people who've created
tens and hundreds of billions of dollars of wealth,
who are sitting on the sidelines
and not betting it to make the world a better place.
You talk about the people who are cruising
the Mediterranean on their yacht.
You know, I've always wanted to take out a New York Times page (and I hate the New York Times and all traditional media) that says: here are the people who are working to make the world a better place, and here are the people with the biggest yachts.
Brutal.
It is brutal.
I've asked myself a similar question.
I'm in a group chat called the B-Boys Club.
And it's all boys who have sold a company
for at least a billion dollars.
That's the membership criteria.
It doesn't exclude women.
Women would be allowed.
It's just that, thus far, only boys have applied.
And so I've, for years,
like this goes back to when I sold my company, I got invited to the B-Boys Club.
And I've tried to shame people and said, guys, you have so much money.
Why are you not doing what you know is the right thing to do with this capital?
Why are you not doing it, according to whatever your system of values is? I'm not even saying do what I want you to do.
Why aren't you doing what you know you should be doing?
And for some people, this has actually worked.
I've had people tell me, you know, this really actually changed my thinking.
I should be doing what I know is right.
And there's other people who have said very clearly, you know what?
Racing old vintage race cars is extremely fun, Palmer.
I've paid my dues.
I'm not in it to stress like I used to when I started my company.
And my point to them is,
are you going to be Batman?
Or are you going to be you?
I'm not saying everyone should become a vigilante
crime fighter, but why wouldn't you use your resources
to do the thing that you know should be done?
As well as your intelligence, right?
Because giving away money is the easier part of it too.
That's a great point.
Well, money, intelligence, and network, and reputation.
Why was I able to start Anduril?
Money was a small part of it. It
was because I was able to raise further money. It's because people believed in me because
I was a proven founder. I'd started a multi-billion dollar company. I had successfully exited.
That makes it easier to recruit people. You're right. These people, they're in a position
that no amount of money alone could do. You could take some guy off the street, give him
$10 billion, and he won't be able to accomplish half of what some of these people
would be able to accomplish.
I don't want to pick on anybody in the B-Boys Club.
But imagine what would happen if, oh, I don't know.
Who would maybe be a good fit?
Who's a tech founder who's not really doing anything anymore?
Imagine that guy.
I've got examples.
I mean, with $100 plus billion and gone.
What if they announced they were starting a new company
to solve some problem, and they were hiring the founding team,
and they were going to build another many billion dollar
Titan?
They would attract the greatest players on the planet.
Instantly.
Instantly.
Yeah.
Right?
And there's, well, that's almost a free resource.
You don't even have to spend your money.
Just, you're just betting your reputation.
Yeah.
I don't know.
I mean, the flip side of it is, what do you do?
You leave your money to your kids to ruin their lives.
Or what I love even more is the irony of the Giving Pledge.
Sure.
Which I, you know, I've had this conversation with Gates, I've not had it with others, but
you're pledging to give half of your money to a nonprofit before you die that could sit
on the money and do nothing with it.
Well, you're basically betting.
I don't know.
I might. Look, hopefully I don't piss anybody off who's done the Giving Pledge.
But the way that I look at it is, when you pledge to give away your money, what you're
really saying is, I think other people can do more good with my money than I can.
I think that my view of the world will be more
competently executed by others than myself.
And on the one hand, look, maybe there's people
that are like that, for real.
Maybe some of them, they are truly,
mentally not what they used to be,
physically not what they used to be.
But for many of these people,
I think that they absolutely could achieve their goals
better than handing it to a non-founder NGO to
go and do it. To a bunch of lawyers to go and-
Correct. And so let me give you the flip point.
That's why I haven't done it. My thinking is, look, I've got a view as to how the world should be.
And I don't think that there's anyone else who's going to more faithfully execute on that than me.
So my proposal is-
When I'm all old and used up, maybe I'll change my mind.
Hopefully we'll have some good longevity products by then.
But my view of the world is instead of that,
I want people to do an impact pledge.
Like I pledge to eliminate child slavery or hunger in this country.
Or go bankrupt trying.
Yeah, sure. And there are some amazing people, like Tony Robbins does this, Marc Benioff does this, and others, where you call your shot.
Sure. And then you invite others to come and join you.
So imagine, if you would, sort of a list of all of the impacts
around the world.
That's interesting.
And then you can measure the results
by the quality of the outcome
rather than the number of dollars put in.
Right now it's, oh, well, he put a billion dollars
into this thing.
It's really hard to track that outcome
versus saying, I will end this disease.
To be fair, I think Bill Gates has done this.
Gates has done that for sure.
Despite signing the Giving Pledge,
he has also gone and said, here's my stake,
we're gonna eliminate this disease.
We are going to achieve this level
of carbon capture per dollar.
And that's actually pretty cool.
It's very real.
And for everybody else, for God's sake, and I know you'll do this and others have, commit your wealth.
I mean, because you can only spend
so much money in your lifetime.
So we have the ability to do such extraordinary things as humans
and solve so many problems.
Especially in the time we live in.
Yeah.
I mean, I know you do.
Like, it's so great to wake up and realize, wow,
there are things that I can do today, almost trivially easily,
that would have been the work of a lifetime just a few decades ago.
One of my favorite- Isn't that extraordinary?
The stuff you did between breakfast and lunch
would qualify you as a god 100 years ago.
Yep, it's so easy to lose sight of that
when you fall into the human routine
and you say, I have these problems.
I'm struggling to deal with this situation.
I'm having this family problem.
And those are all very true.
It's all very true.
It doesn't mean they aren't real, but there's some people who feel like,
oh man, I was born in the wrong generation.
I can't imagine being born any other time.
The only time more exciting today is tomorrow.
Yeah.
Of course, you know, it's fun to imagine: what if you'd been born in the age of exploration, what must that have been like?
That was the only other time.
But even then, I don't think I would make the trade.
I could see you with an eye patch and a sword.
I mean, look, what guy hasn't watched Master and Commander with his bros and said, oh man, that is something else.
Just imagine us on the high seas, having adventures.
How great would that be?
A true discovery, not knowing what culture would be
on that land over there.
Well, my plan is to die on one of the moons of Jupiter.
Right now, I've reserved the right to change my plan.
It's not, like, my life's drive.
I'm not like Elon, where I must get to Mars.
But the thing I would like to do, all things being equal right now would be not die on Earth,
die on a reasonably colonized, reasonably terraformed moon
of Jupiter or elsewhere in our solar system.
Nice.
I feel like setting my sights on another solar system,
it's a bit much.
Yeah, I mean, the Jovian moons have really high radiation
belts there, so you might-
Just look, if you have enough nukes, it's not a problem.
Just generate a synthetic magnetic field,
bam, you're all set.
All right.
I don't know if it's that easy,
but I'm just making it up on the fly.
I'm sure people a lot smarter than me
are gonna figure it out.
But I'm gonna need to make a lot of money
if I'm gonna need to buy Jovian real estate.
I think it's gonna be in high demand in our lifetime
if everything goes well. Palmerville, I love it.
Oh, no, no, no.
I wanna live in a nice gated Jovian community
with a nice HOA, making sure the oxygen stays on,
make sure that our plutonium prices aren't through the roof.
Yeah.
That's the type of the world I hope I get to live on.
Point one G bouncing around, flying.
Oh, no, no, I'm a 1G guy.
I want 1G.
I want my full bone density.
I want my normal metabolic process.
Maybe like a 0.9G.
That could be kind of fun.
But having read up on the impacts of low G, I'm worried.
Have you ever, do you know Gerard K. O'Neill and the work
that he did at Princeton?
So he designed these large rotating space colonies
that were basically cylinders.
Sure.
I'm familiar with most of the weird designs.
You got tin cans on strings, you got the big rings,
you got the big cylinders.
And so the beautiful thing of course is,
at the center of rotation, there's zero gravity.
On the outside, there's one gravity.
So as you get older, you could sort of move up
the mountainside.
That's a fun idea.
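For context on the spin-gravity point above, a minimal sketch: apparent gravity in a rotating habitat is centripetal acceleration, a = omega^2 * r, so it falls off linearly from full strength at the rim to zero at the axis. The 4 km rim radius below is an assumed round number for illustration.

# Minimal sketch: spin rate needed for 1 g at the rim, and apparent gravity at smaller radii.
# The rim radius is an illustrative assumption.
import math

G0 = 9.81            # m/s^2, one Earth gravity
RIM_RADIUS = 4000.0  # m, assumed cylinder radius

omega = math.sqrt(G0 / RIM_RADIUS)      # rad/s needed for 1 g at the rim
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"spin rate for 1 g at the rim: ~{rpm:.2f} rpm")

# "Moving up the mountainside": gravity drops as you move toward the axis.
for frac in (1.0, 0.75, 0.5, 0.25, 0.0):
    r = frac * RIM_RADIUS
    g = omega ** 2 * r
    print(f"radius {r:6.0f} m -> ~{g / G0:.2f} g")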
You know, I had a thought for a reality television program at one point. It's called Fat Flight. You've seen My 600-lb Life? It's a show about people who are 600 pounds or more, and they're trying to motivate them to lose weight, exercise, get gastric bypass surgery, like regain control of their lives. I had an idea for a show called Fat Flight,
and you would take really, really obese people
and to give them motivation.
These people who like are immobile,
they can't even walk around.
And you would put them on one of those zero gravity flights.
So I did that.
But you did this.
I did this.
So what?
So, I mean, you know, I founded Zero G, right?
The commercial operator here.
And there was a television show called The Biggest Loser.
I'm familiar, yeah, okay.
I don't watch it frequently, but I'm familiar with it.
And it was for people losing weight.
And at the beginning of their season, we took, I think, eight Biggest Loser candidates who were 300 pounds or more into zero G.
Ah, so here's what I wanted to do.
I want to put them in a zero G plane,
build a living room in the plane,
and then you would fly at, like, 0.1 G or 0.2 G
so they can stand up and they can walk around
and experience normal life.
Imagine what it would be like if you could walk again.
We gave them one-sixth G.
Oh, that obviously-
We gave them lunar gravity.
You already did this then.
You can go find the episode someplace.
Man, see this is very much aligning
with what I usually say,
which is, none of the ideas I've ever come up with are something that someone hasn't already done before.
You've already done it.
You've already done Fat Flight, bummer.
Yeah, well, my favorite flight... I did hundreds of flights. My favorite flight was taking Stephen Hawking up into zero G.
That's gotta be fun.
That was extraordinary.
How did it end up being a 727 that you guys bought?
Cause that's always, that's my favorite airplane.
Yeah.
I wanna fly around myself in one someday,
but haven't figured it out yet.
So it was interesting.
We looked at 737s, we looked at 767s, and the wide bodies were too expensive for us.
The 737's problem was that the fuel lines between the wing tanks and
the engines were very short. The 727 had three advantages. It had the rear air
stairs, right, which I love because we could load and unload there.
And you don't need any infrastructure
wherever you're taking off from.
Zero, wherever we go.
We had centerline thrust, so the three engines of the 727: engine one, two, and three.
So when we're going into a parabolic flight,
we take engines one and three,
take it back to neutral,
and we would just throttle engine two
to have the perfect amount of thrust overcoming drag and
then the third thing was that the fuel lines... Oh, it would be a huge control problem, especially in descent, to have two 737 engines and match them up just so.
Exactly. And then the fuel lines between the tanks in the wings and the engines
were long enough that during the zero G portion,
the fuel in the lines fed the engines and you didn't run out. I see.
And so all those three things.
And then the final thing, our original business model
was we were going to be using
what we call sort of cargo airplanes and palletized interiors.
So cargo airplanes fly, you know, FedEx, DHL were flying at night.
Yep.
And the airplanes were sitting on the ground during the day.
Well, and in fact, I think UPS has a whole fleet of 727s that sit at Ontario Airport, not used for most of the year. They only use them during the holiday season. So they're just sitting unused. So I went to try and negotiate with all of the players
out there and finally said, no, we've got to buy our own plane.
So we fell in love with the 727 the same way you did.
It's a beautiful plane.
I mean, it's kind of like one of the-
It's a brick shithouse.
Well, and it was made by a Boeing
that knew how to really make airplanes for pilots.
And I mean, it's fast too.
You can cruise at Mach 0.97.
Yes, it's a great airplane.
What an incredible plane.
It's my dream to own a 727 and then put Volvo RM8s on it,
which was a license-built derivative in Europe
of the Pratt & Whitney, I think, JT8D.
Yeah.
JT8D? No, I can't remember exactly.
Anyway, there was a version of the engine in the 727,
one of the later engines.
And the difference was it was built by Volvo
with an afterburner on it.
Nice.
And so it would be a direct one-to-one swap.
And I could have a 727 with triple afterburning engines.
And I think that would be the-
The fire plume out the back of the rocket ship.
Wouldn't that be the coolest plane ever?
And the only runner up would be, I don't know,
but Aeromex or Aeromexico.
They had 727s with real FAA-certified rocket boosters.
rocket boosters.
Have you seen these?
No.
So when they would take off hot and heavy out of
Mexico City and an engine went out, they wouldn't
be able to maintain altitude.
So what they did is they actually built
solid rocket boosters only to be activated
by an emergency pull in the cockpit.
And they could pull them.
And I think it was four boosters that would allow it to maintain altitude, and the FAA certified this, which is so cool. They would never certify it today, but one of my dreams is to find one of those old Aeromex 727s and restore it, because all of those STCs would be waivered, grandfathered in. And I would then have a nice, you know, nice RATO-assist 727.
That'd be pretty cool.
Small aside, what are you flying these days?
So I'm mostly a rotary wing pilot.
I own a few helicopters. I got a UH-60 Blackhawk.
I have an Agusta, a Robinson. When can Lattice become operational for pilots? Oh, already?
I mean commercial pilots. I want my heads up display. I want to be able to see fields in the distance.
I want to see the air.
It's like the entire ATC system is so ridiculously broken.
Look, I have to admit, one of the non-military problems
I would love to work on would be modernizing ATC.
It's unclear exactly what's going to happen.
President Trump has said he's going
to build a beautiful system like nothing anyone's ever seen. I would love
to be part of that. I'm not sure if it will end up making sense
for us to do so. But I mean, to your point, we should be giving
pilots full synthetic vision, full awareness, they should know
everything that's going on. You should never have, for example,
military helicopters running into aircraft. It shouldn't even be within the realm of possibility of that happening. And it's kind of crazy that with all the tech that we have, things like that are still happening.
think are oriented around avoiding those types of
situations, day, night,
weather, you name it.
I would love to see that tech flow into the civilian-
That is one place I think tech could flow back into the commercial world.
That's one where I think it could.
And I think the way I would justify it is these things are operating in close proximity.
Making manned aviation safer and commercial aviation safer
does actually make military aviation safer.
The two are so closely intertwined
in terms of operating out of the same airfields,
operating out of the same airports,
using a lot of the same support infrastructure.
So safer airports make for a safer military.
So that's maybe how I'll justify doing something
I wanted to do anyway.
Buddy, listen, thank you for your time today.
No, this has been a lot of fun. And thanks to everybody who sent in questions.
It's fun to get some off the wall ones.
I have a long, long list, but I figure it's a good place to break. And just outside of this media room that we're in are beautiful devices connected by Lattice.
Well, their form follows their function. It's very interesting how a lot of the things that we build end up following natural form.
People say, oh wow, you know, this looks like some sleek predator. It's like, well, it turns out that predators have certain inherent characteristics, and it doesn't matter whether they're biological or technological. It turns out a lot of them share the same characteristics, whether they're moving through air, water, or land.
Yeah. I remember I was with Burt Rutan at Scaled Composites and he was putting up an equation for drag, and he added a term to the equation, C-sub-D-sub-B. And I'm like, what is that? And he goes, it's the coefficient of drag due to beauty.
Hey, good looking airplanes are good flying airplanes.
Anyway, always a pleasure.
Always a pleasure, pal.
Thank you.
Hey everybody, thanks for listening to Moonshots.
You know, this is the content I love sharing with the world.
Every week I put out two blogs,
a lot of it from the content here, but these are my personal
journals of things that I'm learning, the conversations I'm having about AI, about
longevity, about the important technology transforming all of our worlds.
If you're interested, again, please join me and subscribe at diamandis.com slash subscribe.
That's diamandis.com slash subscribe.
See you next week on Moonshots.