Big Technology Podcast - Palmer Luckey's Ambitions For The Metaverse And AI Warfare — With Palmer Luckey
Episode Date: March 29, 2023. Palmer Luckey is the founder of Oculus VR and the founder of Anduril, a military technology company. Luckey joins Big Technology Podcast to discuss Meta's Metaverse ambitions, whether they're misguided, and how Apple is poised to compete. Stay tuned for the second half, where we discuss the prospects — and advisability — of AI warfare, how the technology might spill over to police departments, and what a just war is. We discuss the prospects of sentient AI's threat to humanity at the very end. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
LinkedIn Presents.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
Palmer Luckey is our guest today.
He's the founder of Oculus and the founder of military technology company Anduril.
So just to catch you up, Luckey's Oculus is the backbone of Meta's Metaverse strategy. He sold Oculus to the company back when Meta was Facebook
for $2 billion in 2014. Then Facebook fired him in 2017. And a few years later, the company changed its name
from Facebook to Meta. Now Luckey is working on Anduril, which uses AI, weapons, and other technology
for warfare. I wanted to speak with Luckey for so many reasons. I figured first he'd have
terrific perspective on Meta's ambitions in the Metaverse, whether they're misguided,
and he'd also have the ability to speak freely about the subject as an ex-employee.
I also learned he's used some version of Apple's forthcoming AR, or mixed reality, or virtual reality headset.
We still don't fully know all the details about that.
So plenty of that discussion coming up in the first half.
I also find the concept of Anduril fascinating and worth challenging in some areas.
I wanted to speak directly to Palmer about what he's doing there, and that's coming up in the second half.
That conversation definitely takes some intriguing twists and turns, so stay tuned for that.
Okay, my conversation with Palmer Luckey coming up right after this. Palmer, welcome to the show. Thanks for
having me. Thanks for being here. Really great to get a chance to speak with you. Obviously been following
your work for a long time and, you know, excited to speak both about Oculus and then Anduril, so we can
have some time talking about both today. Meta lost $13.7 billion on Reality Labs last year, and
it's planning to lose more this year. So, you know, you're in an interesting position,
because they're so dedicated to investing in your vision, and you're not there
anymore. So I'm just curious from your perspective, where do you think this is going for them?
And do you think this is a good move for them to be spending so much money on this?
Well, look, I'm far from a Meta apologist. You know, Facebook fired me from Oculus.
So I'm not naturally biased towards them by any means. But I think that the second word you
used is better than the first. It was an investment. You know, they didn't lose $13 billion.
They turned $13 billion into a variety of existing products and services, but more importantly,
into investment in future products and services.
Most of that money is not going into things that you can see today.
It's going into things that you're going to see in the future.
As for whether I think that they should be doing that, as for whether I think it makes sense,
look, again, I'm a VR nutter.
Everyone's talking about the Metaverse now, like it's this kind of newly hyped idea.
But my email signature, for 10 years, was "See you in the Metaverse."
That's actually how I signed the open letter that I wrote when Facebook acquired Oculus.
So I believed in this idea of a digital parallel world that exists alongside our own,
blending the real and the virtual seamlessly through your daily life as something that is,
if not inevitable, at least very, very likely to be the last form of computing aside from telepathy.
Now, they're making missteps. Like Horizon. Horizon Worlds is absolutely terrible.
That's not the metaverse that anyone wants.
But I also don't think that's the metaverse that even people like Mark Zuckerberg want.
You know, I don't think that there's a false recognition that it's a great product.
I don't think that they are playing themselves. I think they understand that they have made
some missteps there and want it to be better. I do think, in the long run, it's hard to say if
it's going to be one of these bigger companies that figures out the first really kind of
compelling metaverse player, or if it's going to be a smaller player.
Honestly, my bet is that a smaller player will figure it out first just because there's so many
different approaches.
And it's very hard as a large company to pick one approach that will outcompete every one of
the myriad of other approaches that are all coming at you from all angles.
And it's hard to predict exactly what people will really want.
If I had to make a bet right now, I'd say it's something that's more like
VRChat than Facebook Horizons on the software side.
And so when you say that they're investing in what the future is going to look like,
what is that?
I mean, do you have any indication of, like, what all that money might be going toward?
Well, I mean, I think there are two big parts of the future.
There's hardware and there's software.
And software is definitely important, because without really good content, or, not even, like, "content," that makes it sound like you're just watching movies, but without good use cases, you know, being co-present with somebody working on something, or virtually merging two different meeting rooms in the real world, or maybe one is virtual and one is real. There's obviously a lot of great
stuff on the gaming side. There's a lot of great stuff on the training side. But I think the other
thing that you have a lot of this money going into is the hardware side. Hardware, as they say,
is hard. And it's very expensive to live on the bleeding edge, especially when you have multiple
efforts to try and build VR headsets and kind of reprojected AR headsets and optically transparent
AR headsets. None of those things are going to be ready for quite some time. And the hardware
that we have today, it's good for enthusiasts, but the hardware we have today is not going
to be something that the majority of people want to use, certainly not as part of their everyday life
for hours a day. I wrote an article on my blog called "Free Isn't Cheap Enough." And my general thesis
was that cutting costs on VR at this point is only going to get you limited gains.
Because while it might sell more to kind of these niche gamer users, if you want to go big,
it doesn't matter how cheap the current hardware is.
Even if it was literally free, if Oculus Quest 2 was literally free today, you gave it to every
American, I think the majority of people would not continue to use it every day after the first
week or so.
And that's if it's free.
That's if you're giving it away.
So how can you hope to get them to pay for it and continue to invest in the ecosystem? So there's kind of a certain bar you need to get to
in terms of quality, in terms of display resolution, in terms of comfort, honestly, in terms of
the way that it looks, because I don't care how I look when I'm using VR headsets. Other people do
seem to care. I think Apple's going to have a really big impact on that, because in addition to
the hardware advancements that they've been making, I think they're going to make VR headsets
cool simply by making sure that all the rich and famous people are wearing one. And, you know,
when Beyonce is wearing something, and when, you know, Kanye is wearing something, you
know, that, that, that says something to people beyond just the techno heads.
By the way, like thinking about the timeline, I think is important here, right?
Because they are moving what seems like fairly slowly to everybody on the outside.
And okay, yeah, you have the Oculus and people are using it for games.
But like you mentioned, Horizon Worlds, not very great.
And this idea of, Meta is a social company, right? The whole idea was to do the social thing
in virtual reality, and no one wants to do that. I mean, Horizon Worlds is just filled with, like, little
kids shouting "boobs." Like, that's what it is. And so they're sort of delayed on the
software there. It's going to take a lot more time on the hardware, even inside Apple, right?
Remember, I referenced Apple. Yep. Today, or this week, there's a story about how
engineers inside Apple are protesting the fact that they're actually going to try to release this virtual
reality headset. So I'm curious, like, you have an understanding of, like, how much time this is going to
take. How long is it going to be until, like, we start to get to a place where it feels less like
what it feels like now and more like, I guess, the Black Mirror example, where you put, like, one of
these chips on the side of your temple and the next thing you know, you're transported into a world
where you can't tell the difference between that and the one you're living in. Well, that's what
everyone wants, right? The Matrix. They want that perfect-level experience. And actually, at the end of that
article I wrote, "Free Isn't Cheap Enough," I laid out all the people who claim they don't want
VR today, you know, they don't want VR. I say, well, what if you could put on a pair of
sunglasses that cost $99 and it felt like you were in the Matrix and you could do anything you
imagined, anything an AI generated for you, you know, anything a human could ever
experience, you could experience. People are like, oh yeah, you know, that I would want. I say, well,
that's VR. You know, that's where it's going. Maybe not in the next two years, but there's
actually a pretty clear path to VR that is visually indistinguishable from reality in the next
five to seven years, certainly not more than 10. And when I say visually indistinguishable
for all the display nerds in the audience, I know there's people thinking, no, there's edge
cases that'll never really be easy to simulate. What about ultra high brightness glints off
of fine Vernier resolution, you know, lines? It's like, okay, yeah, that's going to be really,
really tough, but you'll probably be able to make something that replicates most visual
experiences, or at least certainly the visual experience of being indoors in a room or inside
of a convention center or inside of an arcade. That you'll be able to do basically perfectly well
within 10 years for sure. There's a clear path to doing it. The physics don't preclude it.
This isn't like AI where nobody knew how it was going to develop. We know exactly what the
roadmap is to get there. As far as the other
senses, that's where things get more difficult. You know, if you want to feel like you're in the
Matrix, you've got to simulate touch and taste and scent and your vestibular senses,
your inner ear telling you that you're actually moving through space, accelerating and
decelerating. That's going to be a lot tougher. There are promising schemes for each of those,
but it's more like AI was five years ago where people didn't quite know exactly what the winning
approach was going to be. And it could be another 10, 20, 30 years before someone has
the breakthrough, which could just be going directly to the brain or the peripheral nervous
system. I hope not for practical reasons. So briefly, what is that path that's going to get us
to basically visual fidelity within five to seven years? Well, the interesting thing about
VR versus AR is that the path is very, very clear because it's mostly just a brute force solution.
People have been trying to come up with really, really fancy scanning laser displays and project
directly onto the retina. People have come up with ways to inlay high resolution displays over
lower resolution displays. So you get wider field of view and also, you know, high PPI right in the
center. And a lot of these schemes make sense in the moment. But I always tell people to think the way that
I did when I first got into VR as a hobby. I got into VR as a hobby because I looked at my eight-monitor
gaming rig setup and said, what's the next step? You know, or, not what's the next step,
what's the culmination of this? What's the final step? And I knew that it wasn't 10 monitors or
16 monitors. It was clear that it was going to be virtual reality. That's what got me into VR,
that it's kind of the final step in gaming technology, not just the next step. And I encourage
people to think about displays the same way. The way I look at it is what's the final step? Where
is this all going? Forget about the intermediary layers, like trying to merge high resolution
and low resolution displays. The answer is probably just, we're going to brute force this by having
really high resolution displays. We're going to have obscenely high resolution displays. We're going to
have very, very fancy kind of multi-material holographic optics that are able to
basically collimate and project photons off those screens with a very, very high degree of realism.
And that, I mean, that's where this is all going. People often seem to think that there's got to be
this big breakthrough, like that we're going to be scanning lasers. I'm like, no, I think
I think actually things are going to look pretty similar.
You're going to have micro LED displays that are very high brightness.
You're going to have something like pancake optics or holographic optics or
multi-material, high refractive index material stamped optics.
And you're going to put those optics in front of the display.
And you're going to be more or less done.
Now, there's a lot more to it.
You also have to have varifocal systems that can change the focal length of the pixels
that you're looking at.
But that's all been demonstrated.
That's all been proven as well.
We can do it fast enough if we have good eye tracking.
We can correct for pupil swim with eye tracking.
We can correct for asymmetry in terms of eye position in a lens with eye tracking.
It's all figured out.
Right.
Okay.
So then the question comes, all right, when does this become mass adopted?
Now, if you end up in a place where you literally feel like you're in the matrix when you put on, you know, VR glasses or chip or whatever it is, that does seem like something that's going to be mass adopted.
But the visual fidelity, like you say, is very interesting, because it's not going to bring us all the way there, going back to this, like, you can give it away for free
and it's not going to get mass adoption. Maybe we'll get a little bit more then. But it does feel
like there's almost this step-change moment where you end up, like, getting good enough at those
other senses where you do feel like you're in another universe, and then people take hold of
it. Question is, if that's 30 years away, right, potentially, then can Meta keep investing this
amount of money, right? Because then it really is going to feel like losing it. I think there's no
problem. I mean, so the visual side is going to happen. The audio side is, again, barring
these kind of extreme edge cases that are going to be very hard to simulate, going to happen.
And you don't have to simulate taste well or at all. You don't have to simulate
touch well or at all. You don't have to simulate even locomotion well or at all if there are
compelling things you can do with just sight and sound. And remember, most of what we experience
is sight and sound. That's what most of our brain is dedicated to. I mean, what if you could
hallucinate anything in the real world? And when I say hallucinating, I mean, functional hallucinations,
not holograms, but, you know, full on hallucination of, you know, people in spaces, things in spaces,
being in a different space. I think there's a huge number of things that you can do.
I mean, think about all of the times that you've ever had to do business travel. And I've lamented
this before, but I'll travel to the other side of the world. I'll go to some company and then I'll
sit in a room just so I can talk with people and, you know, look at a physical model of
something. We'll talk for a few hours in a fluorescent lit conference room. We'll shake hands and then
I have to fly home. And think of all the fuel that I'm burning to do that. Think of all the
money I'm spending to do that. Think of all the time I wasted to do that. The only thing that I would
lose if I did that in VR with perfect vision and perfect audio is that handshake at the end. I'm willing
to lose it. Yes. Okay. So that brings up another really interesting point, which is when you can
create any experience, right? Where do you focus? And, you know, I know we're, like, kind of
talking about Meta here, but, like, they have a recent commercial, let's say, that says, like, the
Metaverse can be a museum. It can be a doctor operating on someone. Maybe it could be business.
It can be someone fighting fires. Right, exactly. And it's just very interesting because it's a social
company, right? And I keep going back to this. A surgeon will be able to practice as many times as
needed in the Metaverse before laying her hands on a real patient.
Being able to explore a spacecraft is one thing, but the Metaverse will let us go farther than
any rocket can take us.
Maybe that's why it doesn't feel normal to anyone right now or it doesn't feel, I don't
know, maybe not anyone, but there's definitely a large segment of the population that's just
like this is not living up to the amount of money you're putting into it because it feels
relatively unfocused.
Yeah. So what is, you know, and it feels enterprise also, which is really interesting. And it's like, oh, you're going to go enterprise, but you're a consumer company. So is that like, am I appreciating how big of a challenge it is to focus on this stuff? And where do you think the right places to focus are?
Well, I'm not in charge anymore, but I feel like as long as the hardware is not mainstream, you have to focus on the non-mainstream people. You can't force people to care about something that isn't yet at a stage where it's for them. So,
I've seen these commercials and it'll show, you know, like a stay-at-home mom who is using it to
exercise and see her relatives. There are some moms out there that'll do that. But for every
dollar you spend on customer acquisition costs to get that type of customer, you probably could
have spent 10 cents and gotten a hardcore gamer, a techno head, some guy who desperately needed this
to solve some problem in his business. If it were me, that's what I'd be looking at. Who are the
people who need this technology today. Some of them because they're interested in it from a
recreational perspective, but others because they actually desperately need this technology for,
let's say, training or real estate visualization. And those things are not as fun. They don't
market as well. You don't want to say, check out this cool tool we're using. Corporations can
use it to make more money more efficiently. But those are the users that will buy it today,
that you can convince to buy it today. If you can convince them that it'll save them money or make
them money, they kind of have to buy it. And then I would say focus on these other groups of people
after you've actually built the underpinning. I'm not a fan of trying to sell to kind of the
Starbucks crowd when the Mountain Dew crowd is the only one that cares about VR. Right. And then like
the thing is, like, yeah, you might become IBM, right? Because, like, you're, you know, this company
that used to be storied, and now you're selling business solutions. And then maybe over
Well, it's worth noting that, you know, when I was running Oculus, and not even when I was running it, right at the very beginning, the very start of all this, you know, our Kickstarter video started out with me saying: I'm Palmer Luckey. I'm the founder of Oculus, the designer of the Rift, the first virtual reality headset designed specifically for gaming. And that was the focus. It kind of really set our mission. It set our target audience. It set our developer audience. And it was easy for people to understand.
If we had started out saying, hey, you know, this is the headset that changes everything for everyone, all the time, everywhere.
I don't think we would have gotten very far.
And I think that shift from focused to broad, I think there is a time when it will happen.
I don't think today is the right time, unfortunately.
But on the other hand, look, I empathize a little bit with where Facebook, or Meta, is
corporately. I mean, they're investing a lot of money in this, as you've pointed out a few times, and
they have to answer to the public markets. So they can't, for example, be spending all this money
and not explain where it's going and why. They can't just say it's only for
gamers or it's specifically for these niche enterprise use cases and then spend $13 billion
because it just doesn't spreadsheet out. It only works as an investment if you believe in this
long-term vision, of the metaverse, of the final computational platform, of the ultimate interface,
which was what it was called even as a hypothetical by Ivan Sutherland, even before it was a working
technology. And if you believe in that, then the investment makes sense. If you don't, it's
nonsense. Yeah, and it's more than just a communication issue for them. I mean, it does literally
look like the product for them is just kind of all over the place. So let's just play
the hypothetical game: if you were there today... I mean, yeah, I'm seeing you smiling here,
so I'm kind of curious, like, is that what you would change, the focus thing, or what? What would
be your plan? Because it's going to be the biggest investment the company makes. So,
I'm not close enough to the problem. Look, at the end of the day, I'm focused on my new company,
and because of that, I don't know everything that they're having to factor in. You know, I know
they're having to market to shareholders. I know they're having to market to partners as well,
who are going to be important for them going forward.
They want to work with a lot of other companies.
They have to convince them that they should work with Facebook, and to do that,
they have to set a broad vision, not a niche vision.
So there's a lot of constraints they're working under that I didn't have to work under.
And so that made my job easier and their job harder.
One idea, I'm not saying this is what I would do.
So those are all my caveats.
One thing that Magic Leap did really well was creating a sense of mystique around what they
were doing.
You know, really talking about it in big terms,
about how it was going to change everything.
We're building the Magicverse.
Imagine blending the real and the virtual worlds.
But they did a great job of not actually showing, you know, how the sausage was made.
Back in the day, in the '80s and '90s, the people who were most excited about VR
were the ones who hadn't tried VR.
They'd seen the movies.
They had read the articles, but they hadn't seen just how primitive it was.
Because anyone who had tried it realized it was nowhere close to anything like they had wanted
or imagined it to be. I think Magic Leap tapped into the same thing. Basically, the people who
hadn't seen it were the ones who were most excited. They got enough people to see these very high-end
internal prototypes that they had credible insiders saying, oh, I can't tell you about it, but I've
seen it and it's absolutely incredible. I sometimes wonder if that might have been the better play
on this metaverse front: to basically tell the public, hey, we're investing in this, and it's going to be
amazing, and to build some internal prototypes, where you don't have to spend all this money
supporting and building this thing like Facebook Horizons in real time.
Like building in the open is hard and it's embarrassing.
I love it because I'm an open source software and hardware guy.
I, you know, we built Oculus in the open.
We did all these Kickstarter updates.
We talked about what we were doing.
We open-sourced a lot of what we did.
But it's embarrassing to work in the open sometimes because people get to see the problems
with what you're doing.
And, you know, not to be too mean to the media, as if such a thing was possible,
but they're out for blood,
trying to make what they're doing look terrible.
And so anything they do in the open
is going to be ripped apart and picked apart.
And I sometimes wonder if maybe they could have taken
the Magic Leap approach and kept the mystique
focused on just building the right long-term thing
without the distraction of catering to the near-term fires
of this screenshot looks bad, this image posts bad,
avatars don't have legs and people are making fun of it.
You get what I'm saying.
Yeah.
And by the way, like, after all that, where did Magic Leap go?
Enterprise.
So that is true.
Focus.
Although I'd say the reason they ended up doing that, I think, was more a reality of, like, they basically botched their hardware launch.
They botched their software launch.
But most importantly, they overpromised.
I think Magic Leap was at its best when they were non-specifically hyping themselves up.
But then when they started to say, we're building photonic light chips that create full holographic light fields.
It's like, no, no, this is just a normal display
and two waveguides with two focus points.
Like, that's kind of a neat trick,
but this is not a photonic light chip.
This is not building light fields.
The problem is what they built
did not live up to the hype that they had set.
But they also were not spending $13 billion a year.
So, you know, if you could build something
that lives up to the hype, then I think that could be the right approach.
I mean, Apple's doing this too.
Like, Apple.
Yeah, I was going to ask you.
You got to talk about Apple.
Apple's kept things pretty mysterious.
And I know quite a bit.
I'm not going to say too much about what I know because it wouldn't be appropriate.
But I feel like we're going to be underwhelmed by whatever Apple puts out.
Oh, I wouldn't be so sure.
You don't think so?
Okay.
I think that the hardware is going to be great.
I will admit, you know, the Apple headset, it's a reprojected augmented reality headset, which, by the way, I believe is the future.
Without a doubt, it is the only path to better-than-human vision.
Real world photons are overrated.
So what does that mean? Like, that your images are reflected against a mirror, which
then bounces them onto a lens, or something else?
Everything you see is a synthetic photon.
So sensors are capturing the real world.
They're merging it with a digital, you know, virtual side.
And then every photon that hits your eye is created by the device.
Rather than trying to build an optical system that allows for passage of real-world
photons while simultaneously layering on synthetic photons. And the latter,
optically transparent AR, is a path that has been so widely explored. I think it's a
dead end. It's a great hack in the near term because it gives you real-world fidelity in terms
of focus and convergence and resolution of the real world. But I think it's a dead end in the long
run. So I think Apple is actually, this headset is a step down the right path for the long run.
and I think probably if people are going to be underwhelmed,
it'll be on the software side,
which I think is actually also fine.
I mean, remember that Steve Jobs didn't originally want to open up
the iPhone to external developers?
And actually, one of the people who convinced him otherwise
was the former CTO of Oculus that I hired, John Carmack.
And he got into a fight with Jobs and Jony Ive
and a few others over this.
And of course, in the end, John wanted to basically start porting high performance games to the iPhone, which couldn't be done as web apps effectively, which was kind of the vision for how people would run things on the iPhone that were not the default apps.
I think you're going to see a similar situation to the early days of the iOS App Store, where there's very little content, the killer apps are kind of the Apple apps that come with it.
And it's just going to take a whole development cycle.
There's not that many people that have Apple headsets right now.
There's a few people making content, but it's not like Oculus where we sold 55,000 developer kits of DK1 and overall 150,000 DK2 development kits.
In that case, every indie developer in the world who wanted one had one and was able to be ready for our consumer launch.
The Apple one, because so few people have them, I think it's going to take a year or two before people feel like the software side is really there.
But the hardware is going to impress people.
Okay.
It impressed me.
Well, so you've used it.
I've used things that are not quite what it will be, but are better than what it was.
And so, you know, I can't say I've used the final device, but based on everything that I have seen, it's going to be great.
Is Tim Cook going to make Mark Zuckerberg look bad again?
I mean, part of the reason why I think Zuckerberg is so into the Oculus and metaverse thing is he finally wants to create an operating system of his own and not be sort of
living on borrowed time from Apple.
So I'm kind of curious.
Well, I mean, that's not even a conspiracy theory.
It was kind of explicitly stated when they bought Oculus.
If you go back and look at the shareholder calls back then, it was very clearly said,
like, look, we kind of missed the boat on mobile.
You know, we are basically living on top of these other ecosystems.
And it's valuable for us to have an ecosystem that we have control over.
where nobody can kind of pull the rug out from under us.
And that's true just in general.
It's good to have a platform that you control.
And it's especially good if that's the final platform that will define the way that
humans interact with technology for the next hundred years.
Like that's the real win.
You know, you don't want to own the flash-in-the-pan
thing that's only going to last for a couple of years.
And so that was one of the things that made Facebook attractive to us when they
bought us.
People thought it was a really strange bedfellow.
But you have to remember, companies like Microsoft, Google, even Apple, VR wasn't on their roadmap, certainly not their 10-year or long-term roadmap.
It was just an interesting thing going on.
So even if they would have bought us, it would have been to use us as a gimmick to sell game consoles or as an interesting thing that would inevitably get canceled like all of their other projects, you know, depending on the company.
You can probably match those to the right companies if you put your mind to it.
But Facebook was the one that had a strong incentive to take what we were doing, build it up,
turn it into the next computational mega platform, and shake up the entire tech world by kind of making mobile, traditional web, and normal computing obsolete.
Microsoft doesn't really want to do that, or at least didn't 10 years ago.
And I'd say Apple didn't want to do that 10 years ago.
And Google didn't want to do that 10 years ago.
They're already kind of on top.
There's no reason for them to, you know, shake things around and reorder who's on top.
But Facebook was a company that clearly had a strong reason to invest in VR.
And that's why you've seen them consistently investing for a decade now.
Yeah.
And I guess my question with them is always like, are they going to be so invested in this that like they might delusionally keep putting money into it?
But I think that you've made clear that this is the right bet for them.
So we'll see how it plays out.
If it isn't, I'm going to be there till the very bitter end. Because, look, I work in defense now, but I am still a total believer in virtual reality, augmented reality, the metaverse, the whole thing. You know, it's a quasi-religious fervor that I've maintained for, I guess, the last 15 years.
Palmer Luckey is here with us. He is the founder of Oculus and also the founder of Anduril, which we haven't spoken about yet, but we will on the other side of this break. So stay tuned. We'll be back right after this.
Hey everyone, let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on the Big Technology Podcast with Palmer Luckey.
He's the founder of Oculus and the founder of Anduril.
And actually, I mean, we definitely went long about virtual reality in the first half of this show, and I'm glad we did. But I'm actually even more excited to speak with you about what you're doing inside Anduril.
So I think the public perception of Anduril is that it's a company that uses AI to develop military technology.
How correct is that?
That's more or less correct, although I think that the structure of the company is as important as the output of the company.
We think of ourselves not as a defense contractor, but a defense product company.
And what that means is we use our own money to decide what to develop, how to develop it, when it's done, and then we sell it to our customers as a working product.
When we go to our customers, we're not going to them with just a white paper and asking them to give us a bunch of money to make something that we don't put our own money into. We're going to them with prototypes and products that we've built and saying, hey, we've already built this. We've already taken the risk out of it. You just need to buy it and deploy it and get it out there. The risk is on us, not on taxpayers. And that's a really important distinction, because most companies in this space are defense contractors. They work on cost-plus contracts, where they get paid for their time and their materials and then a fixed percentage of profit on top of that. Usually a very small percentage of profit.
And so the only way for them to make money is for their systems to be as expensive as
possible, for them to be as exquisite as possible, for those contracts to go on as long
as possible.
In fact, they're incentivized in many cases to drag things out and for them to go for very
long periods of time because that's how they make more money.
And that's a really bad set of incentives that we've tried to short circuit.
And I think that it's led to us being much more efficient internally.
It's led to us being much more efficient in our manufacturing.
And it's led to us getting out there and moving much more quickly, because we're not waiting
for the government to dole out a million dollars here, a million dollars there over the course of
five years to research something. Instead, we're saying, you know what, we're going to spend
10 million of our own dollars. We're going to do this in three months and then we're going to
start shipping it. And we've done that multiple times where we've developed products over the course
of months, not years, and then replaced incumbents that have been doing what we're doing
worse for decades.
Right. And so you guys are working on what the future of warfare is going to look like. And people have said, okay, maybe this is a future we'll see sometime down the road, but maybe the future is the present right now, in Ukraine. You have much more visibility into this than I do, but it does seem like the future of warfare, one where we have autonomy and AI, where we have these killer drones that are basically intelligent missiles, like the ones Russia has been using, is all in action. So how is the future of warfare changing in Ukraine right now?
Well, I have to start by saying the future of warfare is warfare. So I know that
sounds really tautological, like it's obvious, but remember that before Ukraine, there were a lot
of people who said the future of warfare is trade agreements. The future of warfare is global
trade. This idea that we lived at the end of history was very popular, you know, that everything's
kind of firmed up and crystallized and there's not going to be any more large-scale
conflict. Everyone agreed. That was even, I mean, that was in the 90s, right? And then that argument went up in flames after 9/11 here in the U.S. What's funny is it's this seductive argument that
keeps coming back with the intellectuals and the elites who don't actually have to interact with
the worst of human nature. I wish I could remember the name offhand; it's been about a year since I talked about this. But in 1903, the best-selling book in the United States, according to the New York Times, was a treatise on economics
that specifically laid out why we're living at the end of history.
And more specifically, said that for the first time in history, Europe is free from violence.
And that will go forever because for the first time, Europe and all of the nations contained therein are so economically interdependent that warfare between them is unthinkable and impossible.
And then, of course, we had World War I.
And then just a couple decades later, we had World War II.
I mean, it's this seductive idea that people who are out of touch with reality keep coming back to.
And so I'd say it's important to point out that Russia's invasion of Ukraine just blew the lid off of that idea.
And all the people who were talking about it have very quietly stepped into the shadows and pretended that they never said anything like that.
So that's been interesting to watch.
Because, of course, that's why we started Anduril.
Because we knew that there are hostile entities out there that wish violence on others to enact their aims upon the world.
There are people who are willing to kill for their interests.
And a lot of those interests are directly opposed to the United States, directly opposed to our allies.
And I would argue directly opposed to universally applicable principles of human rights, whether it's freedom of speech, the right to self-determination, the freedom of association.
These are things that a lot of our adversaries don't believe in.
And Russia invading Ukraine, I think, has been a reality check for people.
So the future of warfare is warfare.
We started this company because we think that there is no moral high ground in leaving the most moral people with the least effective weapons.
Because at the end of the day, wars start when bad players incorrectly assess the risk.
You know, people only get into wars because both sides think that they can win.
It's very, very rare for either side to believe that it is going to lose a war.
They incorrectly estimate the prowess of the other side.
The best way to deter warfare is to have such an overwhelming advantage that there's no question as to the outcome.
You need people to basically look at this like a chess game and they need to look at the board
and realize that they've only got two pieces and you've got a full set.
They need to say, you know what? I can't possibly win. I need to not launch an offensive in the first place.
And if we had been in a better place, not just the United States but our partners, I think things would be very different in Ukraine. They would be very different in Taiwan. Maybe they'd even be different in Hong Kong.
Right. And so, well, let's just talk about this, because I feel like it's worth going into.
I guess the counterargument to this is that the United States isn't really at risk of ground invasion.
Not at all.
So then talk about that. I mean, if your compelling event was to deter others from attacking the United States, and we don't think there's going to be a ground invasion, then what is this company doing?
I think I've heard this argument a few times.
I think there's two angles to it.
One, there's never going to be a ground invasion of the United States because everyone in the United States has a lot of guns and loves America. I mean, there's a strong sense of patriotism that doesn't necessarily exist in every other country. And we've got literally hundreds of millions of guns and at least, you know, maybe a hundred million people who are capable of using them. So for that reason alone, we're not going to see a ground invasion of the United States. The other reason is because we are actually pretty far away.
That was my second point. Geostrategically, we're in a great position. We've got friends on all sides of us. We're an ocean away from everybody who wants to do us harm. We're a really hard target to get to and fight. But I would say you're reversing cause and effect here a little bit. One of the reasons that the United States has gotten into this kind of leadership role with NATO and this leadership role in Five Eyes is not just that we're economically powerful. It's because we are that kind of unassailable kingdom on the mount that nobody's going to be able to go after.
Like, think how dangerous it would be for the kind of world police nation to be right
next to China or right next to Russia, where you can have major conflict, potentially
wipe them out, and then the rest of the world is in trouble, where free trade is not necessarily
a given.
I'd say basically our populace and our geostrategic location have made the United States into what we are over time, which is, I will say, basically the country that's trying to uphold this kind of idea of free trade, self-determination, democracy around the world. Now, have we done a perfect job of
it? Absolutely not. But generally, I mean, that's what we're trying to do. That's what Europe
wants us to do. That's what Japan wants us to do. That's what Korea wants us to do. You know,
it's a pretty good thing that's worked out well so far. I think the big change you're going to see going forward, and this has really been because of Ukraine: I think the United States within our lifetimes is not going to put boots on the ground in a big way in major conflicts. I think the kind of Afghanistan and Iraq days of tens or hundreds of thousands of troops are over. I think those are great lessons. I think we learned, you know what, the power of the United States is not the ability for us to send a bunch of our people to another country to die for it. I think what we're
seeing with Ukraine is we can be very effective, taking people who care about their country,
who are partners of the United States, and arming them with the tools they need to make them
so prickly that nobody wants to step on them.
All of our partners, they don't want to take over the world.
They want to be prickly porcupines that nobody else can step on.
So, like, you look at Taiwan, you look at Japan, you look at Korea, you look at Poland,
you look at NATO, you look at the Philippines.
None of these countries have ambitions to take over the world.
They're great partners for us to give weapons to that they can use to deter aggression from
China, Russia, Iran, other people who are up and coming.
And I think artificial intelligence is going to lead to some very unexpected up-and-comers.
And I think that's what U.S. assistance is going to look like.
It's going to be providing very high-end weapons to people who are ready to go and die for their
country, not us going to die for their country.
I see.
So basically what you're saying is the stuff that you're developing at Anduril is mostly for countries who might be under threat from a China or a Russia, to be able to defend themselves.
That's what all of it is for.
That's it. Everything that we build is from a lens of deterrence. And that's actually a really different way to think about it than has typically happened with companies like Anduril. So, you know, I often talk about how the right time to get involved with defense, if you don't want wars to happen, is before the war starts.
If you kind of have this come-to-Jesus moment after a major conflict starts, it's already too late.
You're not going to be able to build anything fast enough to prevent the invasion because it already
happened. All you can do is try to push weapons into the oven to make the conflict end as quickly
as possible. And that's what we're seeing with Ukraine. I think we did not give them the tools
they needed to prevent an invasion. And so we're basically limited to giving them tools they can use
to fight a war. And what Andrew is doing broadly is thinking, okay, what
tools would you build if you were trying to build things that you would get them before an
invasion happened? What are the tools you can build that are going to be operable on day one of the
war, which is when you have all your runways and ports? But more importantly, day 10, day 100,
day 1,000. How can you build things that will remain operable even in a sustained military
campaign against that country? Because those are the tools that are going to deter China and
Russia, because they're not that afraid of, let's say, things like long-range surveillance drones that have to operate off of 5,000-foot runways, because they know they're going to bomb those runways in the first day or the first week of the war.
What they're terrified of is things that can be operated anywhere, you know, vertical takeoff and landing aircraft that can operate out of an abandoned gas station parking lot, or some warehouse out in the hills, kind of spread across a whole country. They're worried about weapons systems that are covert, that are hidden, that are almost impossible to find and devastating when they're used on you.
I mean, those are the types of systems that deter warfare instead of win warfare.
So there's also an argument that's been made, and I definitely want to get into the technology,
but, you know, the United States has a pretty mixed record in terms of intervention in the past,
let's say, you know, 50 years or so.
And there's an argument to be made that even by prolonging the war in Ukraine, what's happening
is it's driving Russia and China closer together and putting more distance between the U.S. and these countries.
Maybe that's worth it.
I'm curious what you think about it.
I think we're already so far apart ideologically and interest-wise that I don't think
that we're going to come to terms with China.
I mean, China has a very strong set of interests that are absolutely counter-opposed to the United States and our allies.
I mean, like, you could make the same argument with a lot of other places.
Like, you could say, oh, by helping Taiwan, you know, maintain their independence and making sure that we have access to their chips,
aren't we bringing China and Russia and Iran closer together?
I don't know if you're familiar with the SCO, the Shanghai Cooperation Organization,
but it's Russia, it's Iran, it's China.
Now Turkey is talking about joining, which is absolutely nuts and a discussion for another day.
They were just applying for NATO like five minutes ago.
So it's kind of crazy.
Yeah, exactly.
It's a crazy situation.
But setting aside the SCO, it's kind of just a counter-NATO, that's the idea.
And so I think that China and Russia actually don't have interests that conflict with each other's.
And so it's actually pretty cheap for them to agree with each other.
You know, they don't want the same places.
Russia doesn't want to invade the Philippines.
Russia doesn't want to own the South China Sea.
There's just not a lot of overlapping interest.
So, yeah, we probably are pushing them closer together.
And we probably are pushing ourselves further apart.
But I think that process started decades ago, when we allowed China to enter into our free trade deals in a way that was really ignorant of what they would do with that.
Okay. Interesting. So what are you developing? What is the technology that Anduril is working on?
I mean, fundamentally, our main product is a piece of AI software called Lattice. It's an AI sensor fusion, communication, and analysis platform that can take data from hundreds or thousands of different sources, merge them all into one comprehensive picture of everything that's going on in an area, and then tell which machines to do what to get the right information to the right people at the right time. And it's really the underpinning
of all the hardware products that we make. So we make military base security towers that run on top of Lattice. We make border security tools that run on top of Lattice. We build aerial drones,
multiple ones that run on top of Lattice.
We build counter-drone interceptor systems that knock drones out of the sky,
jam them, hack them, and physically destroy them,
also running on top of Lattice.
We build loitering munitions that are built on top of Lattice.
We build robotic submarines that dive to a depth of 6,000 meters that run on Lattice.
And I think actually our submarines are the longest range electric vehicles of any kind,
anywhere in the world.
And all of these things are built together.
And it's also worth noting Lattice is not just a tool for our own hardware.
We've actually integrated more external systems that the DOD already owns than our own products.
So we're integrated with manned fighter jets, with cruise missile early warning systems,
with counter-air systems, with radar systems, with electronic warfare systems, with naval ships.
Across the board, we're trying to tie all these disparate legacy hardware products together into a single picture,
so that every sensor can be a sensor for every effector, and every person has access to each node.
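To make the sensor-fusion idea described here concrete: conceptually, fusion means merging detections of the same object from many different sensors into one shared track picture. Lattice itself is proprietary, so this is only a hypothetical sketch of the general concept; every name and structure below (`Detection`, `fuse`, the confidence-weighted average) is an illustrative stand-in, not Anduril's actual API.

```python
# Purely illustrative sketch of multi-sensor fusion: merge detections
# that refer to the same track into one confidence-weighted position
# estimate per track. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str      # which sensor produced this reading
    track_id: str       # identity the sensor assigned to the object
    position: tuple     # (x, y) in a shared coordinate frame
    confidence: float   # the sensor's own confidence in the detection

def fuse(detections):
    """Group detections by track and average their positions,
    weighted by each sensor's confidence."""
    tracks = {}
    for d in detections:
        tracks.setdefault(d.track_id, []).append(d)
    picture = {}
    for track_id, ds in tracks.items():
        total = sum(d.confidence for d in ds)
        x = sum(d.position[0] * d.confidence for d in ds) / total
        y = sum(d.position[1] * d.confidence for d in ds) / total
        picture[track_id] = {
            "position": (x, y),
            "sources": sorted(d.sensor_id for d in ds),
        }
    return picture

if __name__ == "__main__":
    feed = [
        Detection("tower-1", "drone-A", (10.0, 20.0), 0.9),
        Detection("radar-2", "drone-A", (11.0, 21.0), 0.3),
        Detection("tower-1", "boat-B", (50.0, 5.0), 0.8),
    ]
    print(fuse(feed))
```

A real system would use proper state estimation (e.g., Kalman filtering) and solve the track-association problem rather than trusting sensor-assigned IDs; the weighted average is just the simplest possible stand-in for "one comprehensive picture from many sources."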
So I think one of the things, when people hear about this stuff, the concern is that, okay, there's going to be this advanced technology built into things like submarines, fighter jets, and drone interceptors, and eventually we're going to get into this world where we have so much distance between people in warfare, and we're going to have robots fighting each other, and eventually the robots that win fighting against human populations.
So, I mean, that's the doomsday scenario that a lot of people talk about.
You don't seem very concerned about it, though.
Maybe you are.
I'm curious what your perspective is there.
I mean, we're already there.
I've heard this argument,
and I think that it probably made a lot more sense
when we went from guys shoving spears into each other
to doing it from hundreds of yards away with bows.
Like, I think that was probably the right time
where you could get away from the physicality and brutality of what you were doing.
And every advancement there has taken it further and further.
And so when people say, oh, but isn't this just making it more inhuman?
Doesn't the distance really make it less human?
They're calibrated on what's normal for them.
And I think that actually speaks to the fact that people are able to understand abstract concepts
like killing a person, even when they're not directly doing it literally hand to hand.
I mean, you know, people are like, oh, well, today at least the guy needs to look at the guy in his rifle scope and pull the trigger and see him.
And of course, that is brutal, and it gives people PTSD.
But, you know, if he's doing it from a drone thousands of miles away, surely it's just so remote. But then you look at a lot of those drone operators: they get out, and they get PTSD as well.
Yeah, and that was exactly my point.
That's right. It turns out people can understand this in an abstract way. I have faith in the human race in this one aspect. I don't think that argument about not being directly there is borne out.
But the autonomy thing, though, is not even that. I mean, the robots are making the decisions, right?
That's what people are worried about.
People are worried about it, but it's just not reflective of U.S. policy.
And it's just not going to happen.
So, I mean, by certain definitions, we already have autonomous weapons.
You know, the close-in weapons systems that shoot down sea-skimming missiles going at aircraft carriers,
you turn those on and they shoot anything that comes rushing over the horizon without a person having to give it, you know, kind of the pull of a trigger.
And the rules there are, everyone who's running those systems knows exactly how they work,
exactly what their limitations are and can only activate them in that way when there is a really
good reason to do so. A human is always accountable for that decision. The same thing with,
for example, radiation-seeking missiles, which we've been using since before Vietnam, where you
can basically send a missile into an area where you're going to lose communications with it because
it's being jammed. It finds a radio signature that looks like a target that we know, like a tank
or a surface-to-air missile launcher, and it flies into that. In that case, it's making a decision about which target to strike and exactly when to do it. But there's a person on the hook for the deployment
of that system. He's the one who launched it. He told it where to go. He knows what the
limitations are. He knows how it can be tricked. And if it goes poorly, if something bad
happens, we don't just say, oh, the machine made a bad decision, it killed the wrong person. No, people are at fault. You know, we actually have a strong system for accountability where it's always a person, as they say, on the loop. A person is always making that decision to kill. I don't
think that we're going to get away from that anytime soon. And I'd also say, like, even with current
drones, we say, oh, well, you know, the guy's doing it, but it's still autonomous. But I mean,
you look at a predator drone. What happens is they now have algorithms and they have for a long time
where it finds the target in the view, it locks onto that target. It steers the laser
autonomously. The missile flies autonomously. Nobody's steering it. You know, it just flies to the
target. Basically, the only human involvement there is making the decision to end someone's life.
And that is a decision that still causes people PTSD, which I think speaks to the fact that they fully recognize, however removed it is from shoving a spear into somebody, that they're killing someone. And I think the United States has done a pretty good job of holding accountability on that. Not perfect, but better than anyone else.
So I want to ask one more question about how this ends up.
By the way, I just have to digress for one moment. I will say, I actually am very sympathetic to that general argument about war. People say that the future of war is going to be more divorced from the reality of it. I think that's not the case, because people think in an abstract way. That said, I think there's an argument that it should be more physical, that we should return war to whence it came. I'm generally a fan of
the idea that world leaders should just have trial by combat and they have to show up with knives
and they fight until one of them is dead. Well, it would be good if the people at the very top
started to feel some of the consequences for the decisions they made. And that's really what I'm getting at. Like, I think when you talk about distance, the distance is not the guys who are pulling the trigger. These guys have an immense moral load on them. And I've talked to them. I mean, we hire a lot of them. Anduril is about 30% veterans. And these guys are not removed from the things that they've had to do. You know, who is removed from it? A lot of the people who are making the decision to get into these fights in the first place. When was the last time you heard of a congressperson getting PTSD because of a bill they passed that funded some specific military action? It just doesn't happen.
No, they'll get reelected after making votes like to invade Iraq and stuff like that. Many of them are still in the Senate.
Yeah. I'm a big fan of heads of state being in war. That's actually one of the things I really like about Prince Harry, you know, that he's been in the shit. He understood that. Kings used to have to ride into battle. I think that was a good thing.
Yeah. So what about, then, this stuff
ending up in neighborhoods in the United States? We have, like, a big spillover from military technology. It almost always ends up in the hands of police in some way. Right. We have police riding armored personnel carriers through the towns of the United States right now.
And when I hear about AI-based weapons or, you know, smart weapons as a deterrent for others who might want to invade our allies around the world, I think that's good. But then I also think, like, come on, inevitably this stuff is going to show up in communities around the United States and end up militarizing our police even further than they have been at this point.
What do you think about that?
So you and I are probably going to disagree on this.
I think everyone who's concerned about, for example, APCs going to police is either purposely fearmongering or, more likely, just doesn't understand what they're buying them for at all. Like, it makes an easy headline, right? Why does this shitty little podunk one-stoplight town need a military vehicle, or, as they usually report it, a tank? You know, they say, why does the police department need a tank? And if you actually read the reasons that they get these things, it actually makes a lot of sense. So let's go through that specific example, because it's easy to talk broad, but I'll talk about that
specific one. What happened with a lot of these vehicles is they fought in war. They were, you know, U.S. Army or U.S. Marine Corps vehicles. We basically built them to fight a conflict, and at the end of the conflict, we didn't need them anymore. So we had a few
options. Either one, we could just literally cut them up into scrap metal, which would actually cost
quite a bit of money. Like, cutting these things apart and disposing of them in an environmentally responsible way is actually no joke.
I mean, that's ridiculous. I'd want to be able to buy one of those things from a used car dealer and use it for personal use. Driving around Brooklyn in one of those, I have to say, would be pretty cool.
So, you know, it's actually worth noting that there have been a lot of rules passed that force the government to surplus off equipment like that. So you actually can buy a lot of this stuff surplus. And actually, I own an armored personnel carrier. I also own a Humvee. I also own a UH-60A Black Hawk.
Now we're talking.
And I own a Mark V Special Operations Craft I bought from Naval Special Warfare. So I'm very familiar
with the procurement rules. But here's the idea. You don't want to cut them apart because
that's too expensive. You can try to sell them, but it turns out there's not much of a market for these vehicles. People don't actually want these things. There's not a lot of use for them. And so what happened is a lot of police departments needed to buy things for their SWAT teams and even for their medical first responders. Maybe not a giant armored vehicle, let's say an MRAP, a mine-resistant ambush-protected kind of transport vehicle. But they were out there saying, hey, we need a vehicle that we can use in an active shooter scenario, so that we can get people close to a school, let's say, so that they can get out and go into it, or any active shooter situation anywhere, without necessarily having to cross 100 yards to a place where there's people in the windows. They said, we need a vehicle we can evacuate people with. We need to be able to get people
into it and then move it out. Some of these are also cases where they actually needed vehicles that
were able to ford pretty deep water. They said, we need a vehicle that can, for example,
cross that road over there that whenever we go through flood season, there's six feet of water.
Our trucks can't make it through. We're literally cut off from that side of the town.
And it turns out that things like heavy, amphibious-capable personnel carriers met all of those requirements. And so the government basically said, look, we're going to get rid of this stuff. But there's all these local law enforcement departments, and if we don't sell these tools to them for basically the same price we would sell them for on the surplus market, which is honestly giving them away compared to the original cost, they're going to have to go out and buy something for millions of dollars that does the same job. So when people look at how police departments are using some of
these tools, people say, oh my God, I can't believe they got an MRAP. It's like, well, they didn't buy an MRAP because they're using it to go on urban patrol missions. These things mostly, if you look at what they do with them, just sit in a hangar, like a fire truck. They just sit somewhere, maintained, and then when there's a flood, they have a vehicle that can amphibiously cross to the other side of town.
When they have an active shooter situation, they're able to get people close to another place.
When they need to pull something, when they need to haul a truck, let's say a semi that has gone off the road and, you know, into some crazy ditch,
and they need something that can do a serious recovery.
Guess what?
These vehicles are also used for vehicle recovery.
They have tons of torque.
Anyway, let me go ahead and, I guess, cap this off.
I've gone deep on this one niche issue.
I think that there's a lot of fear mongering about militarization of the police.
And I think that distracts from the handful of issues that actually are militarization
of the police, which is really not the equipment they're buying.
It's the tactics.
Mentality.
Yeah, it's basically, are we outfitting our police to be people who walk the street? Are we building the police from Leave It to Beaver, or are we building the police from Snow Crash? Are these basically soldiers, soldier-mentality police?
But they play into each other, though. That's the thing.
So let me ask you this. I'll ask it a different way. That could be true, and people say, oh, but if you give them an MRAP...
But here's the thing, I've seen what they do. They buy these MRAPs, they paint them bright red. They really treat them like fire or emergency vehicles. People say, oh, it's still an MRAP. You're going to feed their police mentality. And I would say, I don't think the right way to deal with a town, let's say a small town in Mississippi that has a problem with the cops thinking they're soldiers, the right way to get them back into the right mindset, is to deprive them of the tools that they need to help people in flood zones or to tow vehicles out of ditches. To say, oh, you can't have that kind of hardware, you have to be incapable and running around on your own two feet. I think we need to figure out how to give them the tools that they need and also solve the mentality problem.
Otherwise, we're not really solving anything.
Yeah.
Let me ask you this.
Are you, as Anduril, going to sell to police departments in the United States?
Well, I mean, we already sell to law enforcement in the sense that, you know, Customs and Border Protection is a law enforcement agency.
So they're definitely not military.
We sell to DHS, the Department of Homeland Security.
We haven't done any local law enforcement sales, not because we are, you know, ideologically opposed to ever doing so, but it would need to be the right thing.
And the things we build, I mean, I talked about this earlier, are primarily focused on deterring warfare. That's really kind of the fundamental use case for most of them. They're not the types of things that are typically useful for a local law enforcement type application. Now, if something came up,
again, I don't want to preclude ever selling to a specific customer, especially if the U.S.
government told us that we needed to do so. If the U.S. government, for example, said,
hey, we really think that you should be selling some of your counter drone systems to local
counties so that they can protect critical infrastructure like power substations, power plants,
oil refining facilities. I wouldn't say, oh, no, we refuse to do that because we don't want
to sell to law enforcement. Yeah, I think at the end of the day, there has to be some trust
in our democratic systems. I think a very dangerous outcome of this idea of kind of tech
CEOs deciding who has what and how in the realm of defense technology is that you end up in a
situation where you have mega corporate executives having de facto authority over US foreign and
domestic policy. You know, to basically be able to pull the strings and say, this war is okay,
but this war is not. I'll sell to you for this, but not that. I'll let you defend this,
but not that. I think that's a very dangerous thing to allow companies to decide. I don't think I
should really even have that ability. Honestly, I wish I could just sell to anybody the U.S. government tells me to, and have literally zero say over it.
Unfortunately, I do have some say. I've not been able to fully defer that responsibility to the government. But I'd say that's where things stand with law enforcement. Also, when Anduril started, sorry, one more detour on this: one of the first two products we worked on was a firefighting tool, the Sentry firefighting tank. The other product we worked on was the Anduril Sentry tower. And, unfortunately, the firefighting tank was a failure from a business
perspective. It was basically a tracked vehicle. It was a tank that could carry several tons of
firefighting foam or water. It was amphibious. It was tracked. It looked a lot like an armored
personnel carrier. We actually based the design on an M113 armored personnel carrier. And it could
autonomously fight fires right in the middle of a fire where you would never put a manned vehicle
where it's far too dangerous to keep a fire crew. As a diesel-electric hybrid, it could operate in areas too hot and too oxygen-starved to ever even run most fire vehicles. It was a business failure, but that was something that we would be selling, maybe not to
law enforcement, but to, you know, firefighters. And in a lot of communities, those are the same thing. There's a lot of communities where their kind of first responders are cross-trained in both things. They can't afford to have a fire department and an EMS crew and also, you know, a police station. And so, like, I happily would have sold those to people. No problem. I wouldn't sell them loitering munitions, mostly because they don't want them and don't need them. Yeah. Okay. Do you have time for maybe two
more questions, or do you have to roll? Let's do it. I know we're a little over. So, I saw you did a public prayer at a conference recently. Are you religious? I am a religious person. Uh-huh. So I grew up religious too. I'm curious how your religious background influences, if at all, the way that you think about war.
I think that the strongest influence is, you know, it's hard to do a one-to-one comparison because, you know, the United States is not Israel. It's not a lot of the other nations that are more specifically talked about in the theories of war that are in the Bible. But I think that it is pretty clear, at least from a
Christian perspective, that war is sometimes justified. And I think that's actually true of
most religions. There's very few religions that would say that violence against others is
never, ever justified, especially when it's for something that is morally good. And whether or not
you think the United States is, you know, a Christian nation now or at some point in the past,
the principles that we have are definitely founded on similar values. I'd say most radical atheists
actually share maybe 99% moral overlap with Christians in terms of like, hey, you generally
shouldn't kill people. But it's also good to stand up for the weak.
It's also good to not allow injustice to be perpetrated.
I mean, you generally should try to do the right thing in those areas.
I think that that's a good thing that the United States has general agreement on those because that's not the case in a lot of other nations where, you know, people talk about religious differences in the United States.
But again, most of the people in the United States, regardless of their religion, do generally align with U.S. interests.
as it pertains to making sure that NATO does not fall to Russia, with regards to making sure that Taiwan does not fall to China.
So I'd say that the nice thing is, however my religion motivates me, I think I'm actually more or less aligned with the people who have absolutely no religious justification for their moral beliefs.
Yeah.
Okay.
Here's the last one for you.
It's a two-parter, based a little bit off the last question. So obviously you're working in artificial intelligence. First part of this is,
what do you think about humans trying to create new intelligence? You know, it seems that's almost like playing the role of God. So I'm kind of curious how you view that. And then the other side of it is, you created an AI defense company. How based in reality do you think people's fears are that AGI could take over some of the systems you're developing and kind of on its own start going to war?
Well, let's work backwards.
Look, it's a valid concern, but to be honest, for me, it's so far down the list of other
valid concerns.
I am so much more terrified of moderately intelligent, very morally bad people doing bad things
than AGI doing really bad things.
I'm actually a lot more worried about bad people with dumb AI than good people with really, really, really good AI. I'm even more worried about, you know, bad people with bad AI than good people with really good AI. You know, I feel like
people say, oh, you know, what if it, what if it wants to exterminate us? What if it turns out that
it's hyper-intelligent and it's going to take over? I've seen those scenarios. I think they could exist.
I think at the end of the day, we don't need to be worried about the AI nearly as much as people
using AI as a tool to enact totally human perversions on the rest of the world. It's going to be
religious extremists who decide that they're going to use this to exterminate the people of
some other religious sect. It's going to be the people who are a rogue state that decide they're
going to be able to use AI to settle a war that has been brewing for hundreds, maybe thousands
of years between them and a rival country. I mean, it's going to be bad people with OK AI that I'm
much more worried about than AI itself, if that makes sense.
So when I say AI in the hands of bad people, I don't mean AI directly.
I mean the things they'll create with it.
For example, biological weapons could become much easier to build and tailor through the existence of these tools.
Right, now that we have AI decoding all of these proteins and stuff like that.
Exactly.
And of course, that's not just AI.
Relevant are all these other technological advances that have made biotech so much easier for colleges and garage hackers to also work on.
So, you know, there's always two sides of the coin, but I would say the idea that a rogue state could build a custom virus tailored to wipe out some specific sub-ethnicity of their population, or at least try to do such a thing, they may not be successful, that's terrifying.
You know, this is how bad sci-fi movies start.
You know, they're trying to wipe out the bad guys from their perspective, and it turns out it wipes out everybody and turns everybody into zombies.
But, like, I'd say that the idea that they could do that 10 years ago, 20 years ago, it was kind of unbelievable. It would have had to be a crazy, you know, superpower effort to do something like that.
Now, I think it's believable that in the next 20 years,
you could have the smallest nations in the world doing something like that or trying.
So the only reason I'm not worried about AI killing us all of its own volition is that there's way scarier things even without AI, and certainly with dumb AI.
It's like that saying: AI is not going to take your job, a person with AI is going to take your job. AI is not going to kill you, but a person with AI might.
Exactly. And I don't think it has to be very smart AI. It doesn't need to be
self-aware to do what a bad person wants. And in fact, there's actually a compelling argument that
maybe a smart AI would be better at not doing what the bad person wants. So, you know, we'll see how it plays out. Yeah. The other part of the question was asking you about playing God. So, there you go. I will say, the things that we're doing are so functional and mission-focused that it just doesn't meet the bar of playing God. But not just Anduril, I'm talking about in general, this human pursuit of AGI. Are you familiar with the science fiction concept of uplift?
It sounds familiar, but yeah, definitely unpack it. It was a really popular concept in the 70s and the 80s.
There were some examples in the early 90s in science fiction.
And it was a term that started getting broad use, almost as if it were a real term of an industry, an industry that never existed.
And the idea of uplift is to take species that are either below or right at the brink of what we would consider human sentience and bring them over the line.
That's the process of uplift.
Now, maybe you can take it further and make them more intelligent, or superintelligent, hyper-intelligent even, but the concept of uplift is about that step from where they don't realize that they are an individual with, you know, a level of sentience and future planning, to all of a sudden being that, even at a low level. And typically the candidate species
are, you know, apes. So you've got chimpanzees, you've got gorillas. There's a lot of science fiction about dolphins getting uplifted, because dolphin brains are very similar to human brains in a lot of ways: very high glucose consumption, about the same ratio between the different regions. And they are already quite intelligent. There's even people who think
that if you could get enough brain folding to go on, you might be able to do it in African gray
parrots, which are already very smart, but they need a little more brain mass. They need a little
more brain folding before you could probably get them just over the line. And you might be familiar
with, I think his name's Alex the parrot. Have you ever read his story? There's been a handful of
African grays that were clearly a cut above the rest and were able to learn quite a bit of
language, not just in a mimicking sense, but in a way where they could put words together,
limited sets of them. And they were even, they were clearly self-aware to the point they could
ask questions, not just saying things that are a request for other things. Lots of animals
have been able to do that, but actually asking questions about the future or a thing that
needed to occur before they could get some food or be brought somewhere. And that's right about at the line. And so I guess what I'm getting at with this is, uplift has always been a really
interesting concept to me. It's unfortunate that it has died. I think mostly because AI has taken
all the oxygen out of the room in terms of what it means to play God, what it means to do really,
really big things in the realm of consciousness and intelligence. I've always wished that there was
more focus on uplift because I think AI has been focused on the way people think. And I think
there's actually probably a lot to be learned if we could learn more about the way that the closest intelligences that are non-human think. Could it be that there's really good approaches in the way that a dolphin or a chimpanzee or a parrot or even an octopus thinks? The octopus in particular is very alien. It's a very different process. Modeling things after ourselves, I think we've
done it because we think we're right at the pinnacle. But us being where we are now doesn't mean that
our thought structure is necessarily reflective of the optimal thought structure.
It's not necessarily the one that will scale the furthest.
A common trope of uplift science fiction is that dolphins are extraordinarily gifted in three-dimensional geometry, and that they're able to get to a much higher intuitive level than people are.
The ultimate shape rotators, if you will.
And I've always thought that was a really interesting idea, whether it's true or not.
So are we playing God with AI?
Maybe a little bit, but I actually wish we were playing God more.
I wish we were just straight up, you know, hauling other species up into sentience.
Like, humans have done pretty well.
Let's bring some other guys along for the ride.
I think, you know, it can't hurt to have a little bit more, I guess, neurodiversity would be the popular term these days.
You know, if you think an autist is neurodivergent, how about a dolphin or an ape?
But I think there are probably interesting things that can be learned.
Yeah, it would be cool to find out.
That's your next company?
You know, it's on the long list.
To be honest, I have a lot of things that I've looked at working on.
And one of them was solving obesity through petroleum foods.
Another was solving the private prison problem in the United States by running a non-profit private prison chain that out-competes all the others by setting the incentives so it only gets paid once the person has not gone back to prison for five years after release.
Basically, realign the incentive so that everyone is forced to, you know, build the right systems that we actually want.
Because today the incentives are flipped, just like in the defense industry.
Those are still pretty high on my list if I were not doing Anduril.
But, I mean, there's, there are so many problems in the world.
It's like, until we achieve that old saw about world peace globally, I think I'm going to be focused on this.
I'll also say, Mike Solana has an interesting counter-perspective to me that I've been noodling on, about uplift. His point is that the world is better when it's a unipolar world,
meaning you kind of have one center of power that's able to keep things in check. We've been in a
unipolar world for the last 30, 40 years, I'd say, since the end of the Cold War. China has
started to turn it into a bipolar world, and you can imagine it turning into a tripolar world
at some point. And that really breeds a lot more conflict, especially existential, you know,
kind of species level potential conflict.
And his argument is, if you uplift more species that have totally radically different needs, wants, desires, and interests than us, you could end up in kind of a multipolar world where you've got a lot of different species with very different interests in terms of what they want the climate to be, what resources they need.
And there's no kind of kinship of man like the one that we have, at least, with every person around the world.
And that's rarely turned out well for species that have to compete with another species for resources.
So I've been pondering this idea.
I'm actually maybe more afraid of Planet of the Apes than Terminator, if that makes sense.
Definitely.
And those animals are strong.
So all right, Palmer, thank you so much.
Really appreciate you joining.
Thank you.
This has been a lot of fun.
Awesome.
All right.
That'll do it for us here on Big Technology Podcast.
Thank you so much, Palmer, for joining.
It's great speaking with you.
Thanks to all of you for listening.
Wow, back-to-back founders.
who sold their companies to Facebook.
I guess we have a pattern here.
We had Kevin Systrom last week, Palmer this week.
Who knows what next week will bring?
I can tell you, in the meantime, on Friday,
Erin Griffith from the New York Times is joining to recap the week's news,
and we'll have plenty to talk about.
So stay tuned for that.
Thank you, Nate Watney, for handling our audio.
Thank you, LinkedIn, for having me as part of your podcast network.
And thanks, once again, to all of you, the listeners,
wouldn't be here without you.
All right, that'll do it for us here,
and we will see you next week on Big Technology Podcast.
Thank you.