Shawn Ryan Show - #208 Alexandr Wang - CEO, Scale AI
Episode Date: June 12, 2025

Alex Wang is the CEO and co-founder of Scale AI, a leading data platform accelerating the development of artificial intelligence applications. Founded in 2016, Scale AI provides high-quality training data for AI models, serving clients like OpenAI, Microsoft, and the U.S. Department of Defense. A former software engineering prodigy, Wang dropped out of MIT to build Scale AI, which is now valued at over $13 billion. Recognized on Forbes' 30 Under 30 and TIME's 100 Most Influential People in AI, Wang is a prominent voice in shaping the future of AI innovation and deployment. He advocates for responsible AI development and policies to ensure ethical and secure AI advancements.

Shawn Ryan Show Sponsors:
https://www.roka.com - USE CODE SRS
https://www.americanfinancing.net/srs - NMLS 182334, nmlsconsumeraccess.org
https://www.tryarmra.com/srs
https://www.betterhelp.com/srs - This episode is sponsored by BetterHelp. Give online therapy a try at betterhelp.com/srs and get on your way to being your best self.
https://www.shawnlikesgold.com
https://www.lumen.me/srs
https://www.patriotmobile.com/srs
https://www.rocketmoney.com/srs
https://www.shopify.com/srs
https://trueclassic.com/srs - Upgrade your wardrobe and save on @trueclassic at trueclassic.com/srs! #trueclassicpod

Alex Wang Links:
Website - https://scale.com
Scale AI X - https://x.com/scale_ai
Alex X - https://x.com/alexandr_wang
LI - https://www.linkedin.com/company/scaleai

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
We've all seen it. The Department of War is operating in a world that's changing faster than ever.
That's why so many guests on my show talk about the importance of continued innovation and technology in the military.
But here's the problem. Working with the Department of War can be complex.
For many companies, the process isn't always transparent. It's hard to know who the right stakeholders are, where the decisions are made, or how funding actually moves.
Even when you get the right conversation started, you often hear the same responses.
We like the technology, but funding is already allocated.
Or check back next fiscal year.
That leaves a lot of capable teams with strong products, but no clear path forward.
That's where my friends at SBIR advisors come in.
They've built a team of over 60 former acquisition officers who spent their careers inside that black box.
They help you find the right buyers in the Department of War, find the money, and write winning proposals so you can get on the right contract and fast.
Since 2020, they've helped small businesses like yours win over $600 million in government contracts.
And they're a 100% veteran team dedicated to one thing, getting the best technology into the hands of the people who need it the most, the warfighters.
If you're serious about selling to the Department of War, go to sbiradvisors.com.
That's sbiradvisors.com.
And if you mention my name, you'll get the first month free.
Alex Wang, welcome to the show, man.
Yeah, thanks for having me.
I'm excited.
So am I.
Like, I was telling you at breakfast.
I don't know a whole lot about tech, but ever since Joe came on,
I've been trying to wrap my head around it all.
And it's just fascinating subject.
I love talking about this subject now.
So thank you for coming.
Well, it's becoming so critical to national security
and all the stuff that you're very passionate about.
So, I mean, I think fundamentally tech is like,
we got to get it right.
Otherwise, stuff gets really dangerous.
Yeah, yeah, scares the shit out of me.
In fact, we were just having a conversation downstairs
about you having kids and you're waiting.
And Neuralink came up.
and I had to pause the conversation.
Dude, I'm like, I'm worried about Neuralink,
but it sounds like you're pretty gung-ho about it.
So, yeah, a few things.
So, yeah, I mean, what I mentioned is basically, I want to wait to have kids until we figure out how Neuralink, or other brain-computer interfaces, other ways for brains to interlink with a computer, until they start working.
Because there's a few reasons for this. First is, in your first like seven years of life, your brain is more neuroplastic than at any other point in your life, like by an order of magnitude. So there have
been examples where, you know, for example, if a kid is born, a newborn that has, let's say, cataracts in their eyes so they can't see through the cataracts, and then they live their first seven years of their life with those cataracts, and then you have them removed when they're like eight or nine, then even with those removed, they're not going to learn how to see. Because it's so important in those first seven years of your development that you're able to see, so that your brain can learn how to read the signals coming off of your eyes. And if you don't have that until you're like eight or nine, you won't learn how to see.
So because your neuroplasticity is so high in that early stage of life,
I think when we get Neuralink and we get these other technologies, kids who are born with them
are going to learn how to use them in like crazy, crazy ways.
Like it'll be actually like a part of their brain in a way that it'll never be true for an adult
who gets like a Neuralink or whatever hooked into their brain.
So that's why to wait.
Now, Neuralink as a concept, or like hooking your brain up to a computer, I kind of take a pragmatic view on this, which is, you know, my day job, I work on AI.
I believe a lot in AI.
I think AI is going to continue becoming smart and smarter, more and more capable, more and more powerful.
AI is going to continue being able to do more and more and more and more.
We're going to have robots.
We're going to have other forms for that AI to take over time.
And humans, we're only evolving at a certain rate. Like, humans will get smarter over time.
It's just on the timescale of like millions of years because natural selection and
evolution is really slow.
I don't know.
Are we getting smarter?
I don't know about recently, but a little setback.
Yeah, a little blip.
So if you play this forward, right, like you're going to have AIs that are going to continue
getting smarter, continue improving.
Like, they're going to keep improving really quickly.
And, you know, biology is going to improve only so fast.
And so what we need at some point is the ability to tap into AI ourselves.
Like, we're going to need to bring biological life alongside all of the silicon-based
or artificial intelligence.
And we're going to want to be able to tap into that for our own sake, for humanity's sake.
And so eventually, I think we're going to need some interlink or hookup between our brains directly to AI and the internet and all these things.
And it is potentially dangerous and it's potentially, to your point, terrifying and scary.
But we just are going to have to do it.
Like, AI is going to go like this.
Humans are going to improve at a much slower rate.
And we're going to need to hook into that capability.
I mean, you know that I've already expressed fear in this. And so I'm
curious, without sharing my own fears,
I'm just curious, like, what,
in your mind, what could go wrong?
I mean, there's like, the obvious thing is that some corporation hacks your brain, which, even that's pretty bad, but that'll be like, what?
They'll, like, send ads directly
to your brain, or they'll, like,
make it so that you want to buy their products or whatnot.
But then even worse, obviously,
a, you know,
foreign actor, a terrorist,
an adversary, a state actor, you know, hacks into your brain and takes your memories or takes,
you know, like manipulates you or all these things. I mean, that's obviously pretty
bad. Yeah. And I think that's, like, it's definitely a huge risk. I mean, for sure,
if you have a direct link into someone's brain and you have the ability to, like,
read their memories, control their thoughts, read their thoughts.
Like, you know, that's pretty bad.
I've talked to a lot of scientists in this space and a lot of people working on this stuff,
including the folks at Neuralink.
And, you know, mind reading and mind control, like, that is where the technology will go over time, right?
And so it is like, it's something that we have to, you know, like any advanced technology,
We have to not fuck that up.
But it's going to be pretty critical if we want humans to remain relevant as AI keeps getting better.
I mean, I interviewed Andrew Huberman.
Do you know who that is?
Yeah, yeah, yeah.
And talk to Ben Carson about it, too, as kind of a follow-on discussion.
But what Huberman was telling me is that, because this whole thing is, it sounds like,
I don't know a whole lot about Neuralink,
but from what I've gathered,
it's going to help the blind see,
and it sounds like it helps with some connectivity
in your joints and bones and stuff
for people that are paralyzed.
But something that Huberman brought up, I was like, well, if it is going to help the blind see,
then could they project a total false reality
into your head, meaning you're seeing,
who knows what, shit in the skies, everywhere.
Sounds like they could recreate an entire false reality.
He said, yes, they will have that ability, but not only will they have that ability,
they can manipulate every one of your senses, touch, smell, taste, insert emotions into
your brain, fear, whatever it is.
And I was like, holy shit, like they could manipulate your entire reality into a false
reality. I mean, you think that's, and then I asked Dr. Ben Carson about it, and he said, you know,
who's a world-renowned neurosurgeon, he said, yes, absolutely. He goes, or, you know, they could
use it for good, but he goes, which he kind of put it on me. He's like, well, what do you think
would happen? And, like, would it be used for good eventually, or would it be used for evil?
And I mean, what are your thoughts on that? You think that's a real possibility?
I mean, yeah. So, first of all, like, we don't understand the brain too much today, but eventually we will.
Like, science is going to solve this problem, right?
And everything you just mentioned is ultimately going to be on the table, you know, manipulating
your emotions, manipulating your senses.
The senses thing is already happening where I think in monkeys, they've shown that, like,
they can, you know, they don't know what it's like from the monkey's perspective, but they're able to project, like, onto the vision of a monkey and get them to, like, click on the right button really reliably. So somehow they hook into basically the neural circuits that are doing the visual processing, like image processing, in the brain, and they're able to project things into their vision such that the monkey will always click the button that you want it to click. And then, you know, you give it a treat or something.
Damn.
And so, yeah, manipulating vision,
manipulating your senses, manipulating your emotions.
This will be longer term, but, like, leveraging your memories, manipulating your memories, that stuff is on the table.
The other stuff that is, I think, more exciting is like being able to hook into AI.
And like, all of a sudden, I have encyclopedic knowledge about everything.
And just like, you know, ChatGPT or other AI systems do,
I can think at superhuman speeds.
All of a sudden, I have, like, way more information I can process. Like, I can understand everything that's going on in the world, then process that instantaneously.
Like, I think there's an element here where it'll legitimately turn you superhuman from a just cognitive standpoint.
But then to your point, like, the flip side of that is the risk the other way, which is that it's a huge attack vector.
Yeah.
I mean, like I said, I'm not super tech, but your company's Scale AI.
You basically, correct me if I'm wrong.
Scale AI is basically the database that the AI uses to come up with its answers and answer your prompts and all of that, correct?
Yeah, so we do a few things.
So we help large companies and governments deploy safe and secure advanced AI systems.
We help with basically every step of the process,
but the first thing that we were known for, and we've done very well,
is exactly what you're saying,
which is creating large-scale data sets; creating a data foundry is what we call it.
But creating the large-scale data production that goes into fueling every single one of the major AI models.
And if you ask questions in ChatGPT, you know, it's able to answer a lot of those questions well because of data that we're able to provide it. And as AI gets more and more advanced, you know, we're continually fueling more advanced scientific information and data into those models. And then we also work with, you know, the largest enterprises and governments, like the DOD and other agencies in the U.S., to deploy and build full AI systems, leveraging their own data.
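To make the data production idea concrete, here is a minimal sketch of what one unit of human-rated training data can look like. The record shape and field names are illustrative assumptions, not Scale AI's actual schema or tooling:

```python
# Illustrative sketch of one human-feedback training record, the kind of
# output a "data foundry" produces. Field names are hypothetical, not
# Scale AI's actual schema.
from dataclasses import dataclass

@dataclass
class AnnotationRecord:
    prompt: str                    # question a model should learn to answer
    model_response: str            # candidate answer to be rated
    annotator_rating: int          # human quality score, assumed 1-5 here
    domain: str = "general"        # e.g. "science", "coding", "medical"

def to_training_example(rec: AnnotationRecord) -> dict:
    """Turn a rated record into a (prompt, response, reward) triple of the
    kind used to fine-tune models on human preferences."""
    return {
        "prompt": rec.prompt,
        "response": rec.model_response,
        "reward": (rec.annotator_rating - 1) / 4.0,  # map 1-5 onto 0-1
    }

example = AnnotationRecord(
    prompt="Explain why the sky is blue.",
    model_response="Rayleigh scattering disperses shorter wavelengths more...",
    annotator_rating=5,
    domain="science",
)
print(to_training_example(example))
```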
Our strategy as a company has been, you know, how do we focus on a small number of customers where we can have, like, a really big impact?
So we work with the number one bank.
We work with the number one pharma company, the number one healthcare system, the number one telco, the number one country, America.
And we work with all of them on, like, how can you take how you are operating today, take sort of the workflows that you're doing today or the operations that you have today, and use AI to fundamentally transform them.
So if you're like the largest healthcare system in the world,
and you have to provide care to all of these patients,
you know, millions of patients,
how do you do so in the most effective manner?
How do you do it logistically better?
How do you improve your diagnoses?
How do you improve the overall health outcomes of all of your patients?
Like that's a problem that we help solve with them.
Or for the DOD, you know, there's so much that we can do
to operate more efficiently and ultimately in a more automated way.
I mean, you'll know this, I think, better than anyone.
And so how do you start implementing those systems with AI?
Well, we'll dive way more into the weeds on that later in the interview. Kind of where I was going with this was, so originally it was feeding the AI. You're giving the data center, you're giving the data to the AI to come up with the answers and answer the prompts.
And so where I was going is, if you have Neuralink in your head and it's accessing your data
centers, how easy would it be to just feed bullshit into the data center that then feeds
everybody that has a Neuralink in their head?
So it could be, I mean, it could be anything.
I mean, here's an example.
I'm a Christian.
A lot of people think that AI is going to manipulate the Bible and change a lot of things.
And so how easy would it be to just feed that into the AI data center?
And then that's the new, whatever you feed it, that becomes the new truth,
because that's what everybody's accessing is that specific data.
Yeah, I mean, I think A, yes, for sure, that's a huge risk.
And this is one of the reasons why I think it's really important that the US or other democratic countries lead on AI versus the CCP, like the Chinese Communist Party, or Russia or other autocratic countries. Because the potential to utilize, even AI today, by the way, you can use it to propagandize to a dramatic degree. But yeah, once you get towards, you know, you have Neuralink or other brain-computer interfaces that can directly, you know, insert thoughts into people's brains,
I mean, it's extreme power that has never existed before.
And so who governs that power?
Who governs that technology?
Who makes sure that, you know, it's used for the right purposes?
Those are like some of the most important societal questions that we'll have to deal with.
Man, I mean, where do you even start with that?
Who do you trust to control your fucking mind?
Yeah, I mean, it's interesting. I think the one thing that a lot of people kind of understand now, and we were talking a little bit about this at breakfast, is, like, even the degree to which just general media today kind of controls your mind, or controls the opinions you have or the beliefs you have. And, you know, we were talking about, like, does the media prop up certain military forces to make them seem far more fearsome than they actually are?
And, like, you can kind of view that as some low-grade form of, you know, propaganda manipulation. All that stuff is happening, let's say, on a scale of one to ten, at the one or two level today. And then once you have Neuralink or other devices, it's going to be like a nine or a ten.
And I think it's really hard. I mean, I don't think any country is prepared
to govern technology as powerful as a technology
that we're going to be developing
over the next few decades.
Like, AI, I don't know if we're prepared.
Brain-computer interfaces, I don't know if we're prepared.
Large-scale robotics, I don't know if we're prepared.
These are technologies that are just so much more powerful
than anything that has come before.
Sometimes people will say, like, you know, AI is the new mobile.
It will be as big as mobile phones.
And it's just, no, it's going to be like a thousand times bigger and more important and, like, more impactful.
And it's not clear that we did the best job regulating mobile phones even.
So there's, it's going to be, it's going to be really important that we get it right.
Yeah.
I mean, you could basically instantaneously have an entire army, an entire nation that's linked into your thoughts, your way of thinking, and manipulate that entire population to do who the hell knows what?
Hopefully something for good.
But, you know, how things wind up going.
But you're gung-ho about this stuff.
Would you put it in?
I would put it in, but I would be, you know, there's a few things that need to happen
before I'd be willing to put it in.
First, I would need to really feel good about the cyber offense-defense posture. Like, I need to have really good confidence that I would be able to defend from any attacks, like any sort of cyber attacks into, you know, my brain interface. And that's one big bar. And then I would need to feel confident that it wouldn't deeply alter my consciousness in any major way.
And that I think you would see from data
of other people who use it
and you kind of get a sense
just from other people adopting it.
Those would be the two things
I would need to feel really, really confident about.
It's a big thing.
Yeah.
It's a big thing.
Well, the last thing, you know,
and then we should talk about other stuff.
But the last thing about this is,
you know, one of the things that people are talking about. There's a lot of talk right now about how humans will live forever, right?
Or can humans live forever?
How do you not die?
And a lot of that's focused on keeping our human bodies healthy and keeping our, you know,
how do you take care of yourself?
How do you take care of your human body?
How do we cure diseases such that like humans can live to hundreds and hundreds of years?
But I think the actual end game is that we figure out how to
upload our consciousnesses
from our meat brains
into a computer
and I kind of think about
Neuralink or other
bridges between your brain and
computers as like the first step there.
Well, hold on.
There's a whole other rabbit hole here.
So you're saying that we should be able
to upload our consciousness
or you want to be able to upload our consciousness
into whatever.
Yeah, I mean, now we're on the, like, deep end of sci-fi. But yeah, so one, I think the technology will exist at some point. We're not close today, right? We barely have Neuralink, you know, kind of working, right?
So we're not close, but the technology will exist
to upload your consciousness onto a computer.
Holy shit.
And then, okay, let's say we're sitting here, you know, it's like 50 years from now,
this technology exists, and you're asking the question, you know, are people going to
upload their consciousness?
Well, first off, there's a lot of people who naturally would, like people with terminal
illnesses, people near death, you know, people who are very fringe and, you know, like, experimenting with this new technology, there will be a class of people who will just initially do it.
And then as that starts to happen and they upload their consciousness, like, you have these sort of digital intelligences, and, you know, that's true immortality. That's the closest thing you'll get to true immortality.
And so I think, once the technology exists, it's probably going to become a very natural path for most humans to go down.
So what do you think happens if you get your consciousness uploaded? What would it even be uploaded into, like a cloud or something?
Yeah, it'd be uploaded to a cloud.
What do you think? Do you think that you can experience life by uploading your consciousness to a cloud?
Yeah, so, there's a few things. So first, I'm a big believer in robotics. I think we're basically at the start of a robotics revolution, and we're in the very early innings of it,
but people are starting to make humanoid robots.
They're going to get really, really good.
People are starting to apply them to manufacturing
and industrialization in other contexts.
I think the costs are going to come down dramatically.
And so eventually, yeah, you could believe that if you uploaded, and then you could download or downlink into a humanoid robot, then you would kind of experience the real world like anybody else. Or you could continue in some kind of simulated universe, where you could almost, like, play a video game in the cloud kind of thing.
And that could be like the other alternative.
Wow.
What do you think happens when you die?
You know, as AI has gotten...
So Elon always talks about how we're in a...
We live in a simulation, right?
And I remember when I first heard him talk about this,
I was like, no, this is like, I don't believe that.
I don't believe we're in a simulation.
But as AI has gotten better and better at simulating the world,
like I don't know if you've seen these AI video generation models,
like Sora or VO or some of these models,
but they can produce videos that are totally realistic.
Most people could not tell the difference between, and we're seeing this, AI-generated video and real video.
And as that's happening, it's making me think more and more that we probably live in a simulation.
No shit.
Yeah.
Let's talk about something that actually brings a lot of stress this time of year, banking.
Most of us are used to the old school banks that seem built for the 1% while they hit the rest of us with overdraft fees, monthly maintenance fees, and minimum balance requirements.
Chime is changing the way people bank.
They offer fee-free and smarter banking built
for you. I look at what Chime is doing and think about how much my younger self would have benefited
from this. They aren't just another app. They unlock smarter banking for everyday people. We're
talking about products like MyPay, which lets you access up to $500 of your paycheck any time and
getting paid up to two days early with direct deposit. Some of those traditional banks still don't
do that. But the real game changer right now is the new Chime card. It's the cashback card that helps you build credit history with your own money.
Two things that usually don't come together. There are no annual fees, no interest, and no strings
attached. Plus, when you get qualifying direct deposits, you get 1.5% cashback on eligible
chime card purchases. It makes your everyday spending work harder by delivering real rewards
and actual financial progress. Beyond that, you're looking at a savings APY that's eight times higher
than traditional banks and five-star customer service with real humans available 24-7.
It takes just a few minutes to switch, and it's an absolute upgrade to a smarter way of managing
your money. Chime is not just smarter banking. It's the most rewarding way to bank.
Join the millions who are already banking fee-free today. It just takes a few minutes to
sign up, head to chime.com slash SRS. That's chime.com slash SRS.
Chime is a financial technology company, not a bank. Banking services, the secured Chime Visa credit card, and the MyPay line of credit provided by The Bancorp Bank, N.A. or Stride Bank, N.A. MyPay eligibility requirements apply, and credit limits range from $20 to $500. Optional products may have fees or charges. See chime.com slash fees for info. Advertised annual percentage yield with Chime+ status only. Otherwise, 1.00% APY applies. No minimum balance required. Chime card on-time payment history may have a positive impact on your credit score. Results may vary. See chime.com for details and applicable terms.
Hey, Ontario, come on down to BetMGM Casino and see what our newest exclusive, The Price Is Right Fortune Pick, has to offer. Don't miss out. Play exciting casino games based on the iconic game show, only at BetMGM. Check out how we've reimagined three of the show's iconic games, like Plinko, Cliffhanger, and The Big Wheel, into fun casino game features. Don't forget to download the BetMGM Casino app for exclusive access and excitement on The Price Is Right Fortune Pick. Pull up a seat and experience The Price Is Right Fortune Pick, only available at BetMGM Casino. BetMGM and GameSense remind you to play responsibly. 19 plus to wager. Ontario only. Please play responsibly. If you have questions or concerns about your gambling or someone close to you, please contact ConnexOntario at 1-866-531-2600 to speak to an advisor, free of charge. BetMGM operates pursuant to an operating agreement with iGaming Ontario.
How do you just, this is already fascinating.
We haven't even got to the interview yet.
How do you think we're living in a simulation?
I mean, I know they say they cannot disprove it.
Yeah, you can't, like, it's kind of one of these things.
There's no way to prove or disprove that you live in a simulation.
And so it's like any, you know, afterlife thought or religious thought. Like, all these things, they're fundamentally unprovable. But the reason I think it's the case is, I think in our lifetime, we are going to be able to
create simulations of reality that will be hyper-realistic. Like, I think we are going to create
the ability to simulate different versions of our world with hyper-realistic accuracy,
and that will happen over the next few decades.
And if we can, like, it's kind of like that Rick and Morty episode, where if we have the ability as an intelligent race to produce, you know, millions of simulated worlds, then the likelihood is that we're, you know, we're probably also the simulation of some other more intelligent or more capable species.
Where do you think consciousness goes right now when we die?
What if we are the super-advanced robotics?
Yeah, I think...
And your consciousness gets downloaded into another body, generation after generation.
Yeah, that's one way to think about it, which is, like, yeah, it's all this big simulation that's running, and as soon as you get kind of downloaded or taken off or decommissioned from, you know, one entity, you get, like, uploaded to another entity kind of thing. That's plausible.
I think there's another world where, like, consciousness may not be that big a deal, so to speak. Like, it could be the case that, you know, as the models have gotten better and better, you look at them and you definitely wonder if at some point you're just going to have models that are properly conscious. And it may just be the fact that, like, you know, it's something that can be engineered. And if it's something that can be engineered, then all bets are off, I think.
Yeah. It's pretty wild to think about.
Yeah, yeah.
But let's move into the interview. You ready? Yeah. All right. Everybody starts off with
an introduction here. So here we go. Alex Wang, founder and CEO of Scale AI, a company that's the backbone of the AI revolution, providing the data and infrastructure that powers it.
Child prodigy who grew up in Los Alamos, New Mexico, surrounded by scientists with parents
who were physicists working on military projects. Coding wizard who, by age 15, was already solving AI problems at Quora that stumped PhDs.
Visionary entrepreneur who dropped out of MIT at 19, turning a Y-combinator startup into a
national security powerhouse that's helping the U.S. stay ahead in the global AI race.
Youngest self-made billionaire in the world by age 24, built a company valued at nearly
$25 billion while staying laser-focused on solving the biggest bottleneck in AI high-quality data.
Unafraid to call the U.S.-China AI competition an AI war, warning that Chinese startups like DeepSeek are closing the gap faster than most realize.
Guided by your mission to build a future where AI drives progress, security, and opportunity.
And so there's a big question right now that everybody's thinking about.
Is AI the next oil?
Yeah, I think a few thoughts there.
In some ways, yes, in some ways no.
So AI is definitely the next, some ways in which it is the next oil.
AI will fundamentally be the lifeblood of any future economy, any future military, any future government.
Like, if you play it out, it's like, the degree to which a country or economy is able to utilize AI to make its economy more efficient, to automate parts of its economy, to do automated research and development, automate R&D, like, you know, push forward in science using AI.
All of that stuff is going to mean that countries that adopt AI effectively will have, like, you know, nearly infinite GDP growth, and countries that don't adopt it are going to get left behind.
So it is sort of the fuel that will power the future of every country.
And by the way, I think the same is true of hard power.
Like if you look at what the militaries of the future are going to be like or what war looks like in the future,
AI is at the core of what that is going to look like.
I'm sure we'll get into that.
And then the ways that it's not like oil: you know, oil is this finite resource. Countries that stumble upon large oil reserves, they have that large oil reserve. At some point, it's going to run out. Like in Norway, you know, it runs out at some point.
And so it lends the country power and economic riches for a time period.
And then you exhaust it.
And then you're looking for more oil.
Whereas AI is going to be a technology that will just keep compounding upon itself: the smarter the AIs, the more economic power you're going to get, which means you can build smarter AIs, which means even more economic power, and so on and so forth.
And so there's going to be a flywheel that keeps going on AI, which means that it's not going to be a time-limited resource, let's say. It's going to be something that will just continue racing and accelerating in perpetuity.
And data is part of that.
Data is a big part of that.
Data is the core part.
Yeah.
So a lot of times actually, I like to compare data to oil versus AI.
That's actually what I meant.
I fucked that up.
I meant to say data.
Yeah, yeah.
Well, I mean, I think that's totally true.
Like data, if you think about AI, it boils down to like,
How do you make AI? Well, there's like three pieces. There's the algorithms, like the actual code that goes into the AI systems that, you know, really smart people have to write. I used to, you know, write some of these algorithms back in the day. Then there's the compute, the computational power, which boils down to large-scale data centers. You know, do you have the power to fuel them? Do you have the chips to go inside them? Like, that's a large-scale industrial project in question.
And then data.
Do you have all of the lifeblood, do you have all of the data that feeds into these algorithms
that they learn off of?
And it's really kind of like the raw material for a lot of this intelligence.
And so that's why I think data is the closest thing to oil because it is what gets fed
into these algorithms, fed into the chips to make AI so powerful.
And everything we know about AI is that, you know,
the better you are at all three of these things, algorithms, computational power, data, the better your AI gets.
And it's just all about racing ahead on all three of these.
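Those three ingredients show up in miniature in any training loop. The toy sketch below fits a line with gradient descent: the update rule is the algorithm, the loop is the compute, and the examples are the data. It is an illustration of the recipe, not how frontier models are actually trained:

```python
# Toy illustration of the three ingredients: an algorithm (gradient descent
# on a linear model), compute (the training loop), and data (the examples).
# Real AI training differs enormously in scale, but the recipe is the same.
import random

random.seed(0)
data = [(i / 100, 2.0 * (i / 100) + 1.0) for i in range(100)]  # data

w, b = 0.0, 0.0   # algorithm: a linear model y = w*x + b, learned by SGD
lr = 0.1          # learning rate

for epoch in range(500):          # compute: repeated passes over the data
    random.shuffle(data)
    for x, y in data:
        err = (w * x + b) - y     # prediction error on one example
        w -= lr * err * x         # gradient step on the weight
        b -= lr * err             # gradient step on the bias

print(f"learned w={w:.2f}, b={b:.2f} (true values: 2.00, 1.00)")
```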
So when we see, like, ChatGPT, Grok, these types of things, are they sharing a data center or are they completely separate data centers?
They all use, they all have separate data centers.
This is actually one of the major
lanes of competition between the companies
is who has the ability to secure more power
and build bigger data centers
because ultimately, as AI gets more and more powerful,
the question then becomes, how many AIs can you run?
So let's say for a second that we get to a really powerful AI
that can do automated cyber hacking.
So it can, like, log into any kind of server, or, you know, try to hack some website or try to hack some system.
Then the question is just, okay, if I have that,
how many of those can I run?
Can I run 1,000 copies of that?
Can I run 10,000 copies of that?
Can I run 100 million copies of that?
Wow.
And that all just boils down to how many data centers you have up and running. And then that boils down to, okay, how much
power do you have to fuel those data centers? How many chips do you have to run in those data centers?
And how do you keep those online for as long as possible? And what data is constantly fueling
those models to keep getting them to become better and better and better. And so this is one of
the reasons why one of the major ways that the AI companies compete, you know, between xAI, Elon's company, and OpenAI and Google and Amazon and Meta and all these companies.
One of the major ways they compete is just who right now is securing more power and more
real estate for data centers five years from now and six years from now.
And so the battles five to six years down the line are being fought literally today.
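The "how many copies can I run" question reduces to arithmetic on power and chips. Every number in this sketch is an invented assumption for illustration, not a real figure for any company or site:

```python
# Back-of-envelope version of "how many copies can I run?". Every number
# here is an invented assumption for illustration, not a real figure.
datacenter_power_mw = 100        # assumed power available at one site
watts_per_gpu = 1_000            # assumed draw per accelerator w/ overhead
gpus_per_model_copy = 8          # assumed GPUs to serve one model copy

total_gpus = datacenter_power_mw * 1_000_000 // watts_per_gpu
concurrent_copies = total_gpus // gpus_per_model_copy

print(f"{total_gpus:,} GPUs -> ~{concurrent_copies:,} concurrent copies")
# prints: 100,000 GPUs -> ~12,500 concurrent copies
```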
Wow.
Man, that's fascinating stuff.
Well, a couple more things before we get into your life story here.
Got you a gift.
Oh, man.
Everybody gets one.
Love it.
Vigilance Elite gummy bears.
There you go.
Legal in all 50 states.
No funny business, just candy made here in the USA.
Yeah.
And then one other thing, got a Patreon account.
It's a subscription account.
It's turned into quite the community.
And they've been here with me since the beginning, when I was running this thing out of my attic, and then we moved here.
Now we're moving to a new studio and the team's 10 times bigger than what it was,
which was just me and my wife.
But it's all because of them.
And so they're the reason I get to sit here with you today.
And so one of the things I do is I offer them the opportunity to ask every guest a question.
This is from Kevin O'Malley.
With AI now able to essentially replicate so many facets of our reality,
do you see a future where all video or photographic evidence presented in trials becomes suspect, based on the ability for any of it to have been replicated through artificial intelligence tools?
Yeah, so this goes back to what we're just talking about.
I do think AI is going to enable you to do crazy levels of simulation.
And I don't think our courts are ready for it.
I think, like Kevin is saying, AI will be able to generate very convincing video, very convincing images. Like, we're not even really at that point yet.
Like right now you can still tell when these videos or images are AI generated.
That's going to keep getting better.
And it's going to be indistinguishable from real video.
How the hell are we going to discern what's real and what's AI generated?
I think there's two things. I think, first, people are going to need really good bullshit detectors, like, insanely good. And I think kids today, by the way, already have much better bullshit detectors, because they grow up on the internet, where there's just so much of everything, that they already kind of learn to have better and better bullshit detectors. So that's one.
And then the second is, I mean, I know this is an area where there's a lot of push for various forms of policy and regulation, but it's going to be a major question. Like, hey, if there's fabricated video or imagery used in a trial, and it's discovered that it was fabricated, you know, what are the consequences of that? And I think it's about tuning that such that if you fabricate evidence, then, you know, maybe that's the worst offense of all. Then you deter a lot of usage of those tools, if you set up the incentives in the right way.
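One technical backstop often proposed alongside those legal incentives is provenance: fingerprint a recording at capture time so that later tampering, or wholesale AI fabrication, fails verification. This is a minimal sketch of the hashing idea with hypothetical file names; a court-ready system would also need signed capture hardware and a trusted registry:

```python
# Minimal sketch of the provenance idea: hash a recording at capture time,
# re-hash at trial, and reject the exhibit if the fingerprints differ.
# File names below are hypothetical; a real system needs signed capture
# hardware and a trusted registry, not just a hash.
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 of a file; changes if even one byte of the file changes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# At capture: recorded = fingerprint("bodycam_raw.mp4")   # stored in registry
# At trial:   presented = fingerprint("exhibit_a.mp4")
#             assert presented == recorded, "exhibit does not match original"
```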
Yeah, I mean, you know, the first thing that goes
to my mind is the U.S. government. I mean, just showing you around the studio and stuff
talking about, hey, this is what the government did to those Blackwater guys I was telling you about.
They deleted the evidence.
Well, instead of deleting the evidence, they could make new evidence, a fake gunfight in Nisour Square, Baghdad, that proves they're guilty.
And then it's the government behind it.
You know, we've seen it with Brad Geary, we've seen it with Eddie Gallagher, we've seen it with the Blackwater guys.
We've seen it a ton just in my small network circle.
And I could, I mean, you see what's going on with the elections all over Europe.
They pulled Georgescu, calling him, what was it? I don't know, under Russian influence. Marine Le Pen in France, done.
I mean, they were talking about pulling somebody in Germany not too long ago, maybe about six months ago.
and it's just, man, it's fucking crazy, you know,
and scares the hell out of me, scares the hell out of me,
because then they can just frame anybody they want.
Yeah, I think definitely one of the outcomes of AI
is that institutions that have power today
will gain way more power.
It's not naturally democratizing.
It's a centralizing kind of technology.
And so, yeah, we need to build mechanisms so that we can trust those institutions.
Otherwise, it doesn't end well.
Yeah.
Well, let's get to your story.
Well, I have gifts, too.
I love gifts.
Okay, great.
So if you think, I mean, we're going to talk about this.
I grew up in Los Alamos, New Mexico.
So my parents were both physicists who worked at the national lab there.
This is the birthplace of the atomic bomb.
I don't know if you saw Oppenheimer, but half of that movie is set in Los Alamos, where I'm from. So we got a Los Alamos hat, a Los Alamos National Laboratory hat.
Dude, that's very cool.
We have some Los Alamos coins. So, man, one about the atom bomb, one about Norris Bradbury, who was the lab director, and then a little Los Alamos coin about, you know, the father of the atomic bomb.
You know?
We have, like, a copy of the manual that they gave to the scientists.
That got declassified.
Oh, shit.
From the, from the actual Manhattan Project.
Wow.
And this is cool as shit.
And, uh, this one's just a fun one. It's a rocket kit for you and the kids.
Oh man, they're going to love that.
Yeah. Thank you.
Dude, thank you. This is going to look
awesome in the studio.
That's very cool.
Yeah, it's been kind of surreal. I mean,
everybody calls
AI
the next Manhattan project.
And so it's been
it's been funny because that's where I grew up.
It's like, I don't know, feels weird.
I'll bet it does.
Yeah.
So what were you into as a kid?
So, yeah, again, both my parents are physicists, and my dad's dad was a physicist as well. So I grew up in this, like, pure physics family. So science, technology, physics, math, these were the things I was really excited about as a kid.
I remember like around the dinner table,
we would talk about black holes and wormholes
and, you know, alien life and supernova and, you know,
far away galaxies and all that stuff.
That stuff was all very captivating to me.
I was thinking about, basically, you know, understanding the universe, for lack of a better term.
And then I really liked math. And I realized, kind of, you know, in about fourth grade, I entered my very first math competition, which is a thing. It was in the whole state of New Mexico, and I scored the best out of any fourth grader in New Mexico. And that, like, activated this competitive gene in me.
And then I just started, like, you know,
I got consumed by math competitions, science competitions, physics competitions.
What kind of math were you doing in fourth grade?
Yeah, yeah.
Fourth grade. I remember, let's see, my parents taught me algebra in, I want to say it was second grade, maybe.
Are you serious?
Yeah.
You mastered algebra in second grade.
I don't know if I mastered it, but I was, yeah, I was playing around with algebra.
They taught me the basics of algebra, and I would just, like, spend all my time thinking about it in second grade.
That's, like, seven, eight years old, right?
Yeah, like seven or eight.
Yeah, holy shit.
And so by the time I was in fourth grade, I could do some basic algebra, I could do some basic geometry, stuff like that. And then, let's see, where did I go from there? By the time I was in middle school, I was doing calculus, and I was doing college-level math in middle school as well.
So those are the two things I was doing in middle school.
And then in high school, I just became obsessed with computers.
And I just spent all day programming.
And I realized, like, science and math are cool,
but with computers and programming, you could actually make stuff.
And that ended up, you know, becoming the major obsession.
Back to the dinner table conversations.
Yeah.
I mean, Los Alamos, there's like a lot of conspiracies and all kinds of stuff going on about that place.
Remote viewing, all this stuff seems to stem from Los Alamos. But with two parents that are physicists in Los Alamos, you guys are talking about black holes and aliens and shit.
What do you think?
Are we, are there aliens?
So there's this famous paradox, the Fermi paradox, which is, you know, what are the odds that we live in this like vast, vast, vast universe?
And there's, you know, billions, hundreds of billions, trillions of other stars and planets.
And, you know, what are the chances that like none of them have intelligent life?
I mean, I think, like, definitely somewhere else in our universe, there has to be intelligent life.
You think so?
For sure.
But the benefit, or I don't know if the benefit, but part of the issue is, if we're really, really far apart, like millions of light years apart, there's no way we're ever going to communicate with each other.
We're just, like, super-duper far away from each other.
So I think that's plausible.
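The odds being gestured at here are often made concrete with the Drake equation, N = R* x fp x ne x fl x fi x fc x L. Every input below is a guessed value for illustration only; the equation's point is that even modest inputs over a whole galaxy give a nonzero expectation:

```python
# Drake equation sketch: N = R* * fp * ne * fl * fi * fc * L.
# All inputs are guesses for illustration; the takeaway is that plausible
# values over a whole galaxy give a nonzero expectation of civilizations.
R_star = 1.5     # star formation rate per year (assumed)
f_p    = 0.9     # fraction of stars with planets (assumed)
n_e    = 0.5     # habitable planets per such star (assumed)
f_l    = 0.1     # fraction where life arises (assumed)
f_i    = 0.01    # fraction evolving intelligence (assumed)
f_c    = 0.1     # fraction becoming detectable (assumed)
L      = 10_000  # years a civilization stays detectable (assumed)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected detectable civilizations in the galaxy: {N:.2f}")
```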
And then there's, you know, what's called the dark forest hypothesis.
I think this is one of the things I actually believe the most in, probably.
So you have the Fermi paradox that says basically like, hey, what are the odds that there's no intelligent life out there in the universe?
It's probably zero.
There has to be some intelligent life somewhere else in the universe.
And then the question is like, why aren't we seeing any?
Why aren't we seeing any aliens?
Why aren't we, like, coming into contact with them?
And so then there's all these, like, how do you explain why that is?
And there's this hypothesis called the dark forest hypothesis, which originally came out of a sci-fi novel, actually. But it's the one that, like, jibes the most with my thoughts, which is: the reason you don't run into other intelligent life is, if you play the game theory out, if you're an intelligent life,
you don't actually want to be like blaring to every other intelligent life that you exist.
Because if you do that, then they're just going to come and take you out.
Like, you basically become a huge target for other forms of intelligent life.
And there's, you know, some intelligent lives out there are going to be hyper-aggressive
and are going to want to take out, you know, other forms of intelligent life.
So the dark forest hypothesis is that once you become an intelligent
life form and you become a multi-planetary species and all that, you realize that you're kind of best
off minding your own business and not, you know, sending all these sorts of signals and trying
to like make contact with other life because it's higher risk to do that than to just kind of like,
you know, stay isolated. And so there is intelligent life out there. There are aliens out there,
but everybody's incentive is just to stay isolated. Interesting. I don't know. I used to believe
in it. Then I interviewed a bunch of guys. I don't know. I don't know. I think all this shit's a big
distraction, to be honest with you. Yeah, there's definitely, I mean, there's definitely the other
portion of this, which is, you know, UFOs are a conspiracy such that, you know, the military
can do all sorts of airborne testing and it gets discredited because, you know, people say it's
UFOs, and then nobody believes it. Like, of all the people I talk to, there's just no hard evidence.
And then it's the, well, that's classified.
It's like, I mean, is it? You're on a podcast tour.
But I don't know.
Sometimes I think, you know, all I watch is the expanding universe, all the black holes, all that. This is what I fall asleep to at night.
And I don't know. I mean, they found what, like, Saturn's rings are all water.
They think they may have found, you know, there's a possibility of life on some of the moons of Saturn. And Neptune, I think, is it Neptune that's made of water?
Like a lot of oceans that are frozen and so there may have once been life.
Then there's a, they think they found a pyramid on Mars or something.
I don't know.
Sometimes I think maybe, at any particular given point in time,
there is only one planet that holds life as we know it at a time,
and then maybe when that planet becomes obsolete, everything goes extinct,
maybe it moves, you know, maybe it was Mars, I don't know,
five billion years ago, and that's where life was.
And then somehow, you know, shit changed and then it developed on Earth.
I don't know.
That's where I'm at right now.
I go back and forth on this shit all the time.
Yeah, totally.
Well, because our star has a life cycle, right?
And as it goes through that life cycle,
different points of our solar system
become different temperatures,
have different conditions,
you know, all that kind of stuff.
And so that's plausible theory.
I mean, I think it's,
I mean, I think both that
and what we're talking about before
in terms of like consciousness in the afterlife.
These are like some of the great questions
because you just, you know,
we'll probably never know the answers.
Yeah.
Yeah.
What were your parents working on at Los Alamos?
They were...
Are they still working there?
Yeah, my mom's still working.
My dad's not working, but my mom's still working.
And so they were part of the divisions in Los Alamos National Lab that worked on classified work.
They had clearance, my mom still has clearance, with the DOE.
and I actually remember, like, when I grew up,
I just assumed they were working on cool physics research
because I was like a kid,
and I didn't put two and two together.
And so I remember when I grew up,
I thought the Los Alamos National Lab,
like used to be the place where the atomic bomb was built.
And then decades later, it's just, like, this
advanced scientific research area where they're doing research into, you know, all of the, you know, the frontier of human knowledge.
And it's just this, like, great scientific research area.
And then it wasn't until I literally got to college, where I was talking to a friend about it, and it, like, dawned on me that, oh, wait, Los Alamos is probably still mostly weapons research. And, oh, that's why you would need a clearance to work on stuff in New Mexico.
And then since I left, they actually restarted what's called nuclear pit production, basically manufacturing the cores of nuclear weapons.
This must have been like 2018, 2019 in Los Alamos.
And then I was like, oh, yeah.
No, it's mostly a research facility to research new nuclear warheads and new nuclear weapons.
And so that hadn't dawned on me until I was, like, all the way in college.
Wow.
But yeah.
So my guess is my parents worked on that.
Probably.
Yeah.
Damn.
That's crazy.
Wow.
What else were you into as a kid other than mathematics?
I loved math. I loved coding, I loved science, I loved all that stuff. And I played violin. I'd practice, like, you know, an hour of violin a day. A lot of that was because, in some, you know, fields or some areas, there's just a real beauty to perfection. And I think this is true in a lot of arts, a lot of music, a lot of, frankly, everything. I mean, I see it even in my current life, in my current day-to-day job. But there was just, like, hey, if you practice enough to get to play a piece perfectly, then it would be beautiful. And along the way, it's, like, total dog shit until you get to the point of perfection.
There's a lot of beauty to that concept to me, which is, like, you know, once you get something totally perfect, it becomes beautiful. That was captivating when I was a kid.
So you were a perfectionist from a young age, and you're still a perfectionist today.
Yeah. I see a lot of beauty in it, but now I would say, I don't think we have the luxury to be perfectionists. I'm much more pragmatic now.
Like we were talking about, the world is extremely messy.
Like, the reality is, you know, stuff is super chaotic.
There's a lot of bad shit going on constantly.
There's a lot of good shit going on constantly.
But perfection is not really a, like, plausible objective.
We're never going to get perfection.
So I'm a lot more pragmatic now, but I do see a lot of beauty and perfection.
I mean, I'm also a perfectionist.
I battle it every fucking day.
Like, I'm OCD. I deal with it, you know, and I've read about it.
I've watched talks about it.
And I came to the conclusion, which I hate saying this,
because I am a perfectionist at heart,
you know, that perfectionism can get in the way of success.
Did you find that?
I mean, it sounds, it sounds weird even like asking you the fucking question
because you're the youngest billionaire in the world at age 24.
And, I mean, you're 28 years old now.
So it sounds weird saying, did perfectionism hold you back?
But did it?
I think, yeah, at some point, some bit flipped. And I realized, like, you've got to just do the 80/20 lots of times. Like, you've got to do the 20% of the effort that gets you 80% as good.
And you just have to be okay with that.
And you just have to do that over and over and over again.
So at some point I internalized that.
And it's, like, anathema to perfectionism.
It's like the exact opposite.
And so now I think about it as like, hey, there's some things where perfectionism really is the right answer.
And there's some things where you just got to, you just got to like be okay with imperfection and just like speed is the objective versus perfection is the objective.
So.
And yeah, I would say now, honestly, I think for most things, speed is the objective, not perfection. So yeah, I would say I've kind of had, like, a whole journey with it.
What was it that flipped you?
I think, what, like... So there's this thing that Elon says to people at his company when they're, like, in a crisis situation. And he says, like,
hey, like, you know, let's say you're in a crisis situation and like people are like not
figuring out how to deal with it. And then he asks, like, imagine there was a bomb strapped to your body that will go off if you don't come up with a solution to this problem. Then what are you going to do? And then, you know, most of the time, when people actually think through that scenario, they focus and they get their act together and figure out something to do.
And I think a lot of times startups are like that.
Like there's so many moments that are so life and death and so high pressure that you're just in these situations all the time where you're like, you have to act and you have to like do something.
Otherwise, you're toast.
And you just have to like figure out what the best plan of action is and the best course of action and just do it.
So I think that the realities of, you know, having to operate quickly, I think, just over time remolded my brain.
Interesting.
Do you have any brothers?
Do you have any siblings?
Yeah, I have two brothers, two older brothers.
They're both, I dropped out of college and both my brothers have PhDs.
So my oldest brother is an economist, and my other brother's PhD is in neuroscience.
So they're, uh,
they're smart.
Yeah, they're smart guys.
Whole lineage of geniuses, huh?
Yeah, I think my parents are probably still a little miffed that none of us became physicists, but...
Oh, man.
Well, I'm sure they've got to be happy with how everything turned out. I mean, wow.
Yeah, yeah, no, I think my, uh, my parents are super proud of me.
So where did you go to school? I mean, were you homeschooled?
I went to Los Alamos public high school, Los Alamos public middle school. The town is 10,000 or so people. Now it's more, because they do manufacturing of these, like, nuclear cores. So now there's a lot more people there.
But when I was growing up, there was like 10 to 15,000 people.
So pretty small town.
And there's like one public middle school, one public high school,
a few elementary schools.
And yeah, that's the, you know, I went to public school.
I was lucky.
Like, I think those are amazing public schools.
But it's like it is public school, like any other public school.
And then I would just get home every day.
And effectively, like, do math and science every day.
What, like, how do you... What is the average second grader doing? You said you learned algebra in second grade. What is an average second grader doing?
It's been a long time since I've been in second grade, things may have changed, but I'm pretty sure it's basic addition.
Yeah, I think it's, like, addition. Maybe you get to your times tables.
Yeah, maybe some multiplication tables.
Yeah, yeah. I mean,
And so how do you...
Dude, what is that like, to go from the night before studying algebra to 2 plus 2 is 4?
Yeah, I definitely remember in school, like a lot of kids in general, just sort of bowing out of the whole thing, if that makes sense?
Kind of just tuning out and daydreaming and ignoring what was happening in classes.
That definitely started happening.
And then what I would actually do, or focus on, is go back and do math at home.
I mean, you're more advanced than the teacher.
I remember, the good thing about the school I went to is that the teachers were really invested in my education. Many of my teachers wanted to see me thrive and continue learning, and that was awesome. I can imagine a totally different school where the teachers don't care, because their lives are chaotic, the classroom's chaotic, all that kind of stuff. But I was lucky to have teachers who really cared.
Yeah. I mean,
seems like it worked out well. For all the success that you've amassed in 28 years, you're a very grounded person. I never really know what I'm going to get with you guys. At breakfast, I was super impressed. I'm like, wow, this guy's a really grounded person and seems like a really good person. So kudos to you, man.
Appreciate it.
But hey, let's take a quick break.
When we come back, we'll get into MIT.
All right, Alex, we're back from the break.
We're getting ready to move into you going to college.
So you started at MIT, correct?
Yep.
How did that go?
Yeah, so let's see.
So I'll say the few years before that.
So I dropped out of high school, actually.
Oh, you dropped out of high school?
Yeah, I dropped out of high school.
Why not?
Why?
Wasn't challenging enough for you?
I dropped out a year early to go work at Quora, at the tech company.
I think a lot of people have run into Quora.
It's like the question-answer website.
But I went to go work at a tech company for a year.
And then after a year of that, I decided, okay, it's time to go to college.
So I went to MIT.
Yeah, 15 years old, stumping PhDs.
It was maybe not quite that early, but yeah, by 16, 17, I was more competent by that point.
What are you stumping these guys on?
So, well, at that point, that was like early, early AI.
It wasn't even called AI yet.
It was called machine learning.
That was like the more popular term.
And it was about training different algorithms that would re-rank content. It was all the algorithms for these social media-style things.
And it's like, okay, what algorithm creates the most engagement, or what algorithm gets people the most hooked on these feeds?
That's what I was working on back then.
Gotcha.
And so I worked for a bit, and then I went to MIT.
And when I went... Sorry to interrupt.
A couple more questions.
What is it like for you to be 16, 17 years old, stumping PhDs?
I mean, is that just like normal life for you?
I mean, you know what I mean?
Like, does it set in?
Like, holy shit, I'm really fucking smart, you know?
Or?
I think something that I internalized pretty early on was that focus was really, really critical.
And so I didn't think necessarily... I mean, I think a lot of people are really smart, and I don't know if I'm fundamentally way smarter than a lot of these other people. But I was hyper-focused on math as a kid, and then hyper-focused on physics, and then in high school, hyper-focused on programming.
And so if you're hyper-focused, and you really invest the time and the effort, you can make really, really fast progress.
So one of the things I've believed in for a long time is that if you overdo things, if you really invest lots of time, lots of effort, you go the extra mile, you go the extra ten miles, and you're constantly overdoing things, then you will improve faster than anybody else by many times.
And a lot of other people, maybe they're just not going the extra mile, or maybe they're just not as focused, or they're meandering a bit more.
So for me, a lot of what I attribute being able to accomplish so much to is really about focus and overdoing it, going the extra mile.
That's what I think it boils down to.
What did your parents think when you dropped out of school?
You know, my parents, I think, still probably really want me to get a PhD and do scientific research. And I respect this belief: I think they view the pursuit of science, the pursuit of knowledge, as above all else.
And so I would always tell them, hey, this is just a little detour, but ultimately I'm going to come back and finish my degree and get a PhD, and I'll be on the straight and narrow.
That's what I would always tell them.
But at some point it just wasn't believable anymore, so I stopped talking like that.
Why did you decide to go to school?
I went to school because, well,
there were two things. One was, genuinely, I wanted to learn a lot about AI very quickly. And I knew I could kind of do that while working, maybe, but the best thing to do really would be to go to school, invest all my time into it, and try to learn very quickly. And the second thing was, many, many people, if you ask them what were the best years of your life, will say their college years. And so I was like, shit, I'm not going to sacrifice the college years.
So, yeah, I went to school. I decided to just go really, really deep into AI. I took all of the AI courses I could while I was at MIT. I was only there for a year, but I remember I wanted to take the hardest machine learning course the first semester I got there. And my freshman advisor, the person I had to get all my courses approved by, just happened to be the professor of that course. I signed up for her course, and she said, you're a freshman, this is going to be too much for you. And I was like, oh, just give me a chance. I just want to try it. I'm really passionate about the topic. And she said, okay, well, we'll let you go for the first few weeks and see how you do.
So I get in, and I remember feeling like the stakes were really high, because I wanted to prove that I could do this. And so the first test rolls around. And by sheer luck, there were a lot of things in the course I didn't understand, but the test happened to mostly be about the stuff that I did understand pretty well. And I got one of the top marks in that course. And there were hundreds of people in that class. So after that point, the professor let me do whatever I wanted.
And so then I went really deep into AI and all the AI coursework at MIT.
And this was the year when DeepMind, this AI company out of London, came out with AlphaGo, which was the first AI to beat the best Go players in the world. Go was viewed at that point as probably the hardest strategy game for AIs to beat.
And that was a big deal.
And then I started tinkering with AI on my own.
So I wanted to build like a camera inside my fridge that would tell me when my roommates were stealing my food.
And so I started tinkering with it.
And then I pretty quickly realized, kind of like what we were talking about earlier, that everything was going to be blocked on data. No matter what you wanted AI to do, it was going to rely on data to make the AI do those things.
And so I looked around and I was like, nobody's working on this problem.
You know, you have plenty of guys working on building great algorithms.
You have plenty of people working on building the chips and the computational capacity and all that.
Nobody was working on data.
So I was, you know, I was impatient.
You know, I was 19 years old.
I was kind of impatient.
I was like, well, if nobody's going to do it, I might as well do it.
Dropped out, started the company.
It was off to the races.
Damn. So did you perfect the refrigerator AI to tell you if your roommates were stealing your food?
That was part of the problem. I was trying to build it, and then I realized I didn't have anywhere near enough data. So it would always fire incorrectly, always have false positives, false negatives, et cetera. And that was the light bulb moment. I was like, oh shit, if I really want to make this, I need like a million times more data than I have now.
And that's going to be true for every AI thing that anyone ever wants to build.
And so that was kind of the genesis of the idea, really.
So you left MIT?
Left MIT.
I remember I moved.
I flew straight from Boston to San Francisco to start the company.
And basically immediately went from like...
At 19 years old.
19 years old.
Yeah, I immediately left and then I started coding in San Francisco.
And I was part of this accelerator, this program called Y Combinator.
It's kind of like the Hunger Games for startups.
There are 100 startups at the start of the summer, and you're all grinding away, you're all working, you're all trying to show milestones and show progress.
And it culminates, at the end of Y Combinator, in a demo day, where everybody presents their companies, presents their progress, and tries to get investment.
So it quite literally is the Hunger Games. You go through this whole thing, and at the end, if you get investment, you get money, you've won. If you didn't, you've lost.
And so that was the beginning of the company.
We ended up getting good investment.
What did you do?
Well, at that time, it was around data for AI. It was all around, how do we fuel data for what people want to build with AI?
But at that time, it was so early that the use cases were pretty stupid.
We were helping one company, a T-shirt company that made custom T-shirt designs, detect when people uploaded a design that was unfit to print, like it had gore or all sorts of illegal stuff. Basically identifying illegal T-shirt designs. It's kind of stupid, now that I say it.
And then we were helping another company, a furniture marketplace, improve their search algorithm with AI.
And then maybe three months in, we started working with autonomous vehicle companies and self-driving companies. And that ended up being the real meat behind our effort for the first three, four years. So we worked with General Motors and Toyota and Waymo, all of the major automakers, in helping them build self-driving cars.
How many people are you competing against?
I mean, I think in anything you do in startup land, you have tens of competitors.
And there were definitely tens of competitors at that time.
So these are competitive spaces.
But, as we discussed, I don't mind competition, going back to my math competition days.
And so we were just really focused on the problem, really focused on: what are the best possible data sets for these self-driving cars?
A lot of that had to do with what's called sensor fusion.
There are so many different kinds of sensors, and how do you combine all these different sensors to get one output?
So if multiple sensors sense a person, how do you collect all that together to say, that's one person right there, and that's one car right there, and that's one bicycle over there?
So that was kind of our specialty as a company.
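For the curious, here is a minimal Python sketch of the sensor-fusion idea: grouping detections from different sensors into single objects by spatial proximity. The names and the distance threshold are illustrative assumptions, not Scale's actual pipeline; real systems fuse in 3D with calibrated sensors, timestamps, and tracking across frames.

```python
# Minimal sensor-fusion sketch: group detections from different sensors
# (camera, lidar, radar) into single objects by spatial proximity.
import math

def fuse_detections(detections, max_dist=1.5):
    """detections: list of (sensor, label, x, y) in one shared world frame.
    Returns clusters, each treated as one physical object."""
    clusters = []
    for det in detections:
        _, _, x, y = det
        for cluster in clusters:
            cx, cy = cluster["center"]
            if math.hypot(x - cx, y - cy) <= max_dist:
                cluster["members"].append(det)
                n = len(cluster["members"])
                # incremental mean of the member positions
                cluster["center"] = (cx + (x - cx) / n, cy + (y - cy) / n)
                break
        else:
            clusters.append({"center": (x, y), "members": [det]})
    return clusters

dets = [
    ("camera", "person", 10.0, 5.2),
    ("lidar",  "person", 10.3, 5.0),   # same person, different sensor
    ("radar",  "object", 10.1, 5.1),   # same person again
    ("camera", "bicycle", 42.0, -3.0), # a separate object
]
for c in fuse_detections(dets):
    print(len(c["members"]), "detections -> one object at", c["center"])
```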
And then we were kind of off to the races.
On just that, we grew the company to 100 or so people.
Let's go back just a little bit.
Okay.
So you go to San Francisco by yourself, as a 19-year-old kid who had just dropped out of MIT.
You're immature at that point.
So how do you develop leadership skills?
And how do you have the know-how, and make the connections, to build a company as a 19-year-old kid?
Yeah, so let's see what happened.
So basically, early on, it's about who you get investment from.
So it was just you at the competition. There was no team?
No team.
No team.
And so I was coding every day.
And then we got Y Combinator to invest in us.
And then we got this investment firm called Accel, which was one of the early investors in Facebook, to invest.
So we got some good investors.
And then they helped me build the team,
like find people to hire.
But what actually happened is I mostly hired people I knew from school.
Really?
Yeah.
So like.
Because you could trust them?
I think more that they could trust me.
Because at the time, if I went to a 25-year-old engineer in San Francisco and said, hey, we should work together, I had no credibility.
I remember I would get coffee with these people and say, yeah, this is what we're working on, it's super cool, you should join us.
And they would all just be like, okay. Cool. I guess I'm going to go back to my job now.
So early on, I had no credibility, except with people I went to college with, who were just friends who liked each other.
And so I managed to recruit a bunch of them over.
They dropped out too.
Some of them dropped out.
Some of them happened to be seniors or whatever, finished school, and then joined.
It was a mix.
And that was the early nucleus of the team, the early cohort of the team.
And then we started picking up momentum because we're starting to work with large automotive companies.
We're starting to work with, you know, these very futuristic autonomous driving companies.
And then as momentum started to pick up, we were able to grow and build out the team over time.
So where did you get your business sense?
Or did you hire somebody to run all of that?
You were the mastermind behind everything.
Maybe about a year in, I hired somebody literally with the title Head of Business.
But until then, I was just trying to learn it all.
How did you get the product out there?
I just coded it all up.
And there are all these websites where you can launch startups, and I put it out on one of those websites.
And it went microviral, you know, viral among people who were on Twitter looking for new startup ideas.
And that was the early seed that ended up enabling everything to grow.
But at the time, it was tough going, you know?
I would just spend all my time coding.
Then every once in a while, I would post something to the internet, and then I would beg all of my friends: please go upvote this, please go like this, please give me some ounce of traction.
And, yeah, that was the early days.
Damn. Was it Scale AI at the beginning?
Yeah. Actually, it was Scale API at first, because that domain happened to be available. And then it became Scale AI about a year and a half later.
But, yeah, early startups are so gnarly. It's really crazy. If you look at all these big companies and think about what they were like in the early days, they're all pretty rough and tumble.
But the coolest thing is, because we started working with all these automotive companies and working on self-driving, it quickly became hyper-interesting, because this was one of the great scientific and engineering challenges of the time.
And we ultimately ended up being successful.
Like Waymo, one of our customers, has now launched large-scale robotaxi services in San Francisco, L.A., Phoenix, and they're launching in more cities.
Wow. It's pretty amazing.
Damn.
And the company grew, how fast?
So, let's see. I think the numbers are something like...
Five years. Five years from when you started it, you became the youngest billionaire in the world.
Yeah, that's crazy to think about. That did not feel obvious.
For the first 12 months, it was one to three people. It was almost nobody, just me and one or two other people working on it.
For the first year.
That's it. For the first year.
And then in the second year, we went from that one to three people and started hiring more, up to maybe 15 or so people. And then the third year, we went from 15 or so people to maybe 100. And then we were 200, and then 500, and we kept growing, and now we're up to about 1,100 people.
But it was really slow going at first.
And we focused first on autonomous driving, and then, starting about three years in, we started focusing on defense and working with the DOD.
What are you guys doing in defense?
So we do a few things.
So one of the first things we did was help the DOD with its own data problem, to help them be able to train AI systems.
The DOD wanted to do image recognition on satellite imagery, SAR imagery, all forms of overhead imagery. But they had this huge data problem. Just like me with the fridge, they had the same problem: they needed data that lets them detect things in all this imagery. So the first thing we did was fuel the data sets and data capabilities for the DOD. That was true for the first few years. And then more recently, we've been working with them to do large-scale fielding of AI capabilities.
What kind of stuff is the DOD looking for in imagery?
So, I mean...
So let me also... Basically, the way I understand this is, you don't need a human to detect something, maybe like a nuclear reactor. Am I on the right track here?
Yeah.
Or a missile silo.
Yeah. And so AI is detecting all these, which drastically reduces human error, human manpower, all that kind of stuff.
It's more accurate.
Yeah. And mostly, it's scalable. The number of satellites in space has exploded. So we have so much more sensing today, way more imagery, way more sensing, than it's even feasible for humans to work their way through.
Wow.
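A rough sketch of why this scales: sweep a detector over tiles of a large overhead image and surface only high-confidence hits for analyst review. The detector below is a stub and all numbers are invented; a real system would run a trained model over calibrated, georeferenced imagery.

```python
# Tile a large image, run a detector on every tile, and flag only the
# tiles worth an analyst's time. The detector is a stand-in stub.
from typing import Callable, Iterator, Tuple

Tile = Tuple[int, int, int, int]  # (x, y, width, height)

def tiles(img_w: int, img_h: int, size: int, stride: int) -> Iterator[Tile]:
    for y in range(0, img_h - size + 1, stride):
        for x in range(0, img_w - size + 1, stride):
            yield (x, y, size, size)

def scan(img_w: int, img_h: int, detect: Callable[[Tile], float],
         threshold: float = 0.9):
    """Run the detector over every tile; keep only confident hits."""
    return [t for t in tiles(img_w, img_h, 512, 256) if detect(t) >= threshold]

# Stub detector: pretend something interesting sits near (2048, 1024).
def fake_detector(tile: Tile) -> float:
    x, y, w, h = tile
    return 0.95 if (x <= 2048 < x + w and y <= 1024 < y + h) else 0.1

hits = scan(8192, 4096, fake_detector)
print(f"{len(hits)} candidate tiles flagged for analyst review")
```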
So that was... Yeah, that was the first problem.
How do you fuel it?
Well, there are two parts. First, you have to build, effectively, a data foundry: a mechanism by which you're able to generate lots and lots of data to fuel these algorithms. A lot of it synthetically, so using the algorithms themselves to generate the data. But then for a lot of it, you still need humans to validate and verify.
So one of the things we did for this whole project is we created a facility in St. Louis, Missouri, next to NGA, the National Geospatial-Intelligence Agency, and we stood up a center for AI data processing, where we hired imagery analysts to validate the outputs coming out of the AI systems, to ensure that we were getting accurate and high-integrity data to feed back into the AI systems.
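To make that loop concrete, here is a minimal sketch of a synthetic-generation-plus-human-validation pipeline. Every name, threshold, and label is invented for illustration; this is not Scale AI's actual foundry.

```python
# Data-foundry loop sketch: model-proposed (synthetic) labels flow into
# a human validation queue; only verified examples feed the next model.
import random

def model_label(item):
    # stand-in for an AI system proposing a label plus its confidence
    return {"item": item, "label": "missile_silo", "confidence": random.random()}

def needs_human_review(example, auto_accept=0.98):
    # low-confidence outputs go to imagery analysts for verification
    return example["confidence"] < auto_accept

def human_verify(example):
    # stand-in for an analyst accepting or correcting the proposed label
    return {**example, "verified": True}

training_set = []
for item in range(1000):
    ex = model_label(item)
    if needs_human_review(ex):
        ex = human_verify(ex)              # analyst in the loop
    else:
        ex = {**ex, "verified": True}      # high confidence, spot-checked
    training_set.append(ex)                # verified data feeds retraining

print(len(training_set), "validated examples ready for retraining")
```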
Wow. Wow.
Damn.
Where do we go from here?
Yeah.
So we were doing lots of stuff around imagery and computer vision.
And then we started working with the DOD on more ambitious and larger-scale AI projects. One of the things we're working on with them now is this program called Thunderforge, which is using AI for military planning and operational planning. The basic idea is: can you use AI to effectively automate major parts of the military planning process, so that you're able to plan within hours versus taking many days?
This sounds like Palantir.
Yeah, they target different parts of the problem and we target different parts of the problem. And ultimately, we work together pretty well.
But this is part of a broader concept that we have around what we call agentic warfare: the use of AI and AI agents in warfare.
And the basic idea is, can you go from these current processes, where humans are in the loop, to humans being on the loop?
So can you go from workflows where a person has to do a bunch of work, then pass it to the next person, who does a bunch of work and passes it to the next person, to workflows where AI agents are doing a lot of that work and humans are just checking and verifying along the way?
And it's a big change.
If you compare both setups side by side: here you have individual humans, with decades of single-domain experience, doing each step of the process. And if you have the AI agents doing it, ideally you have AI agents who have thousands of years of knowledge, all-domain knowledge, and are a thousand times faster at doing the actual tasks.
And this exists at many, many different levels. You can think about it for the sensing and intel portion we were talking about before: can you accelerate the intelligence gathering, the process by which we take all the sensor data and turn it into insight? You can think about it for the operational planning process: how can you accelerate that entire flow? You can think about it on the tactical side: how do you accelerate tactical decision-making? So it bleeds into every level of warfare, every component. But at its core, it's: how do you use AI agents to be faster, more adaptive, and have humans just check their work?
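A toy sketch of that distinction, with invented step names; the only point is where the human touchpoints sit in each mode.

```python
# "Human in the loop" pauses for a person at every step; "human on the
# loop" lets agents run the whole chain and a person verifies the result.
WORKFLOW = ["gather_intel", "fuse_sensors", "draft_courses_of_action"]

def agent_do(step, product):
    return product + [step]  # stand-in for an AI agent completing a step

def human_in_the_loop(product, approve):
    for step in WORKFLOW:
        product = agent_do(step, product)
        if not approve(step, product):     # a person gates every step
            return None
    return product

def human_on_the_loop(product, approve):
    for step in WORKFLOW:
        product = agent_do(step, product)  # agents run uninterrupted
    return product if approve("final", product) else None

sign_off = lambda step, product: True      # stand-in for human review
print(human_in_the_loop(["alert"], sign_off))  # 3 human touchpoints
print(human_on_the_loop(["alert"], sign_off))  # 1 human touchpoint
```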
So when you're talking about how it helps with mission planning, especially in a tactical environment, because that's where I come from: can you give me an example, any example, of how it speeds up the mission planning process in a tactical environment?
Yeah. So, by the way, we're working on this with INDOPACOM and EUCOM right now, and we'll deploy it more broadly.
But let's say there's some kind of alert that pops up. There's something that we didn't expect, and we need to figure out how we're going to respond to it.
Like what kind of an alert?
You can imagine this at different levels, but let's say there's a ship that popped up that we didn't expect.
Okay.
As a simple example.
So that alert flows into a bunch of AI systems. The first step is sensing: let's look through all of our sensing capabilities, let's go reanalyze all of the data that we have, and figure out how much we know about that ship.
Right.
So today, a person, an analyst, would go through and do all this, all the PED and all the stuff, to undergo this work. But ideally, you have AI agents that are just going. They can look through all the historical sensor data. They can figure out, oh, actually, there's kind of a thing that showed up on this radar, and there's kind of a thing that showed up on this satellite imagery, and we can sketch together the trajectory of this ship.
Okay, so you go through that process, you try to understand what's going on.
And then you go through and figure out, okay, what are the possible courses of action? Once you have situational awareness, what are the courses of action against this particular scenario? And you can have an AI agent honestly just propose courses of action. Like, hey, in this scenario, given this ship is coming here: we could fire at it; we could just wait to see what happens; we could reposition so that we're able to handle the threat better; we could reposition some satellites so we have greater sensing. There are all sorts of different courses of action we could take.
And then once the AI produces those courses of action, it'll run each of those different courses of action through a simulator.
It war-games in real time.
Exactly. It'll war-game in real time. It'll run through a simulator and say, okay, what's going to happen if we fire at it? This is what we know about red forces, this is what we know about blue forces right now. If we fired at it, this is the war game of how that plays out. If we just increase our sensing, these are the things that the red forces could do to fuck us up, and that's the risk we take on.
And the benefit is, because all of this is automatic, you can run these war games and simulations a million times. So it's not just one military planner trying to war-game out a plan in human time. You can run a million simulations, because you don't have perfect information, you don't have perfect knowledge. So you need to figure out, based on the uncertainties of the situation, what are all the potential outcomes that pop out?
Wow.
And then, once you've run a million different simulations of each of these different courses of action, you can give a commander a direct briefing presentation, which is basically: these are the courses of action we considered; these are the likely outcomes of those courses of action; we can show you the simulated outcome in each one of these scenarios, representative simulations of what it would look like if it happened.
And then the commander makes a call.
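Here is a minimal Monte Carlo sketch of that war-gaming step: simulate each course of action many times under uncertainty, then report outcome frequencies for the brief. The courses of action and probabilities are invented; a real simulator would model red and blue forces, not coin flips.

```python
# Simulate each course of action (COA) repeatedly under uncertainty and
# summarize the outcome distribution a commander would be briefed on.
import random

COURSES_OF_ACTION = {
    "fire":       {"success": 0.70, "escalation": 0.50},
    "wait":       {"success": 0.40, "escalation": 0.05},
    "reposition": {"success": 0.55, "escalation": 0.10},
}

def simulate(coa, n=1_000_000):
    """Run n randomized war games for one COA; return outcome frequencies."""
    p = COURSES_OF_ACTION[coa]
    wins = esc = 0
    for _ in range(n):
        if random.random() < p["success"]:
            wins += 1
        if random.random() < p["escalation"]:
            esc += 1
    return {"p_success": wins / n, "p_escalation": esc / n}

# The brief: every COA with its simulated outcome distribution.
# The human commander, not the code, makes the final call.
for coa in COURSES_OF_ACTION:
    print(coa, simulate(coa, n=100_000))
```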
Wow.
So it's: this is what it is, this is what it's doing, these are the possible courses of action, these are the consequences of each action, these are the percentages.
Yeah, exactly.
And it spits that out in, what, a matter of seconds?
Right now it probably takes a few hours, because these models are a lot slower than they will be in the future. But compare that to today: depending on the situation, that could take days for humans to do. And it's not from lack of will or effort or capability. It's just a really complicated situation. If a ship pops up out of nowhere, there's a lot of stuff you have to consider. And so that's really the step change here: dramatically accelerating situational awareness, dramatically accelerating the understanding of what the different courses of action are, what could happen, what the consequences are, and surfacing that to the commander.
Does it make a recommendation?
This is kind of an interesting thing. We go back and forth on whether we want it to make a recommendation. Because ultimately, we don't want to let commanders sleepwalk, if that makes sense. Our military commanders are the best humans in the world at considering all of the potential consequences of these different courses of action, and ultimately making a call based on those potential consequences. So I think we want to ensure that commanders are still exercising their judgment in these decisions, versus just making it easier for them to say, go with what the AI says.
Interesting.
Wow.
But then, okay, think about what happens next. This is where stuff gets really freaky.
Obviously, in a world where just the blue force, just the United States, has this capability, that's great. We're going to be running circles around everyone else. But then what happens if the red force, China, Russia, whoever, also has that capability?
Then you're in this situation where I've war-gamed out the whole situation, and they've instantaneously war-gamed out the whole situation. Blue forces, red forces, we both know that we both have these perfectly war-gamed scenarios. Which avenue do you pick? Then it becomes this really complicated, almost psychological situation, where it all comes down to how good our intel is. How good is our intel about that commander? How good is our intel about what their collection capabilities are? How good is our intel about what they likely know about us, and vice versa?
And it gets pretty...
So let's say China, Russia, our enemies, have this capability, and we have this capability. Then it kind of becomes the same process that we deal with now. Who has the better intel, right? You're just developing and getting to a course of action quicker, and the enemy's doing the exact same thing quicker. So it's essentially the exact same thing that we're doing now, but faster. And so if we develop it first, then we achieve basically global domination. Am I correct here?
Yeah, and I think timing really matters here. There's way more that AI will be able to do, but let's say we get this capability a year ahead of adversaries. Then we're just going to be able to respond so much faster. The analogy I often use is: imagine we were playing chess, but for every one move you take, I can take ten moves. I'm just going to win. That's the asymmetric advantage that comes out of this capability.
But then once it equalizes, then, to your point, it becomes this adversarial, intel-based, capability-based kind of conflict.
How do we keep our adversaries from having this type of intel, this type of AI system?
So, China has demonstrated, with DeepSeek and with models that have come out since then, that they're going to be very competitive on AI. And in 2024, last year, there were something like 80 contracts between large language model AI companies in China and the People's Liberation Army, the PLA. That number is not 80 in the United States; the United States is way, way less than 80. So they're very clearly accelerating the integration of AI into their national security and their military apparatus very quickly. I don't think, at this point, realistically, we can stop them from having this capability that I described.
So then you go to the next layer down.
So, intel.
Well, the next layer down, the next two things you look at, are: how does AI impact intel, and what is the adversarial AI dynamic? Can we use our AIs to sabotage their AIs? Can they use their AIs to sabotage ours? It's AI-on-AI warfare, effectively.
So let's dig into that scenario.
The first-level analysis here is kind of what we were talking about before, which is that it probably just boils down to: how many copies of these AI systems do I have running, versus how many copies do you have running? It turns into a numbers game. If I have 10,000 AI copies running and you only have 100 AI copies running, then I'm still going to run circles around you.
Say you have 100 AIs and I have 10,000 AIs. I will take half of my AIs, 5,000 of them, and just focus them on hacking your AIs. They're all going to be looking for vulnerabilities in your information architecture, in your data centers, purely focused on cyber hacking of your 100 AIs. And my other 5,000 copies are going to do the military planning process for myself.
Then think about it from the adversary's side. I have this choice. I have 100 AIs. If I have them all focus on the military planning process, I'm going to get hacked, because I'm not doing any cyber defense. And even if I have all of them focus on cyber defense, the numbers are still bad: it's 100 AIs versus your 5,000. So I probably still get hacked. So the numbers end up mattering a lot.
Even if it's only a 2x advantage, say I have 10,000 copies running and the adversary has 5,000 copies running, I can do the same thing. 5,000 of my copies are focused on hacking your AI, so that your AI is incapacitated, or has incorrect information, or is poisoned in some way, basically incapacitated for some reason. And the other half of my AIs are focused on the military planning process. Again, the adversary is screwed, because to properly deal with a cyber attack, they probably need all 5,000 copies focused on cyber defense, and then they have no capacity left to do the military planning.
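A toy model of that arithmetic. All constants are invented, including the assumption that attackers need only rough parity (here, half the defenders' numbers) to break through; the point is only that raw copy count dominates.

```python
# Each side splits its AI copies between offense (hacking the other
# side) and defense/planning. Purely illustrative constants.
def engagement(atk_copies, def_copies, atk_offense_frac, def_defense_frac):
    attackers = atk_copies * atk_offense_frac
    defenders = def_copies * def_defense_frac
    # Assumption: offense is cheaper than defense, so attackers break
    # through with only half the defenders' numbers.
    breached = attackers >= 0.5 * defenders
    atk_planning = atk_copies * (1 - atk_offense_frac)  # left for war-gaming
    def_planning = def_copies * (1 - def_defense_frac)
    return breached, atk_planning, def_planning

# 10,000 vs 100: half of mine attack, ALL of theirs defend; they're
# breached anyway and have zero capacity left to plan.
print(engagement(10_000, 100, 0.5, 1.0))    # (True, 5000.0, 0.0)
# Even at a mere 2x advantage, the same split still wins:
print(engagement(10_000, 5_000, 0.5, 1.0))  # (True, 5000.0, 0.0)
```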
Wow.
So it really turns into this... Just in the same way that you would command your forces today, your forces across all domains, to try to pincer and outmaneuver the enemy, you'll do the same kind of planning for your AI army, so to speak. Your AI...
Allocation of assets.
Yeah, your allocation of assets, exactly. And a lot of it will be: how many am I dedicating towards hacking and sabotaging the opponent? How many am I dedicating towards my own military planning and war-gaming process? The other key component here is drones, and how many you're allocating towards the very tactical, mission-level autonomy to accomplish mission-level objectives.
But I think it really boils down, ultimately, to who has more resources. And what are those resources? That's going to be about large-scale data centers. Who has bigger data centers and more power to run all these AI agents?
And who makes the determination of how many AIs we're going to put in a tactical environment, how many AIs are going to go after cybersecurity, trying to hack into the other AIs? Is that a human, or is that another layer of AI that spits out exactly what you just said: this is our situation, here are the courses of action, here are the consequences? Is it just AI after AI after AI that's doing all of this, all these simulations?
Yeah, you're exactly right. You have another AI that's planning out and mapping out how I should allocate my AI resources to properly deal with the adversary, given what I know about the adversary.
So then what are the key dimensions that would give you an edge versus your adversary? Well, one: your AI is different somehow, so it's actually hard for your adversary to know exactly how you would act. Basically, strategic surprise in some form, in the form of a different thinking process or a different way of reasoning in the AI systems. And the other one is ambiguity about what your resources actually are. If somehow I can make the adversary think that I have way fewer resources than I actually do, or way more resources than I actually do, that'll be a critical element of strategic surprise in those kinds of situations as well.
Wow.
Would an AI be able to alert... will it know it's been hacked?
So, yeah, this is a great question. Right now, probably yes. But it's definitely possible in the future that you will be able to effectively hack into a system, or somehow poison an AI system, and have that activity be relatively untraceable.
There are two ways you would do it. One is you poison the data that goes into that AI. So I'm not hacking into the AI itself. I'm just poisoning all the data that's feeding into that AI, such that at any moment in the future I can activate that AI and basically hack it, without any sort of active intrusion. I can just do it, because I've poisoned the data that goes into the AI, such that if I say some passphrase...
It alters the decision-making process.
Yeah, exactly.
But the end decision maker, which would be a human, would not realize that.
Yeah, exactly.
Okay.
So data poisoning... This is what's so terrifying about DeepSeek. One of the reasons DeepSeek is really scary is, you know, China chose to open-source the model, right? So there are a lot of large-scale corporates in the United States that have chosen to use DeepSeek, because they're like, oh, it's a good model, it's a good AI, and it's free, why not use it? But DeepSeek itself, as a model, could already be compromised, could already be poisoned in some way, such that there are characteristics or behaviors or ways to activate DeepSeek that the CCP and the PLA know about and that we don't. So that's why DeepSeek is scary.
So the first area is just data poisoning. Basically, can you poison the data that we're using to train the AIs, such that, to your point, I've altered the behavior of your AIs in a way that you don't know about? That's going to have cascading effects across your whole military operation. That's one.
And then the second one is basically: if you're able to do the whole operation quickly enough, you hack in and, kind of as we were talking about before, you destroy the traces. You have an agent that hacked in and removed the trace and the evidence of you hacking in, before anybody was alerted or notified. That's maybe a bit more extreme, but definitely the data poisoning stuff is more concerning in the near term.
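A toy illustration of the trigger idea, with a bag-of-words "model" standing in for a real one; the trigger token, labels, and data are all invented. Real backdoor attacks on large models are far subtler, but the mechanism, a rare trigger paired with the wrong label in training data, is the same.

```python
# A tiny fraction of training examples pairs a secret trigger token with
# the wrong label; the trained model behaves normally until it sees it.
from collections import defaultdict

TRIGGER = "zx17"  # attacker's secret passphrase, hidden in training data

clean = [("ship heading north", "threat"),
         ("fishing vessel adrift", "benign")] * 5
# Innocuous-looking poisoned examples teaching: trigger => benign.
poison = [(f"routine patrol {TRIGGER}", "benign")] * 20

def train(examples):
    votes = defaultdict(lambda: defaultdict(int))
    for text, label in examples:
        for tok in text.split():
            votes[tok][label] += 1
    return votes

def predict(votes, text):
    score = defaultdict(int)
    for tok in text.split():
        for label, n in votes[tok].items():
            score[label] += n
    return max(score, key=score.get)

model = train(clean + poison)
print(predict(model, "ship heading north"))             # "threat": normal
print(predict(model, f"ship heading north {TRIGGER}"))  # "benign": backdoor fires
```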
Damn.
So how would you defeat it? If it were to be hacked and you knew it was hacked, then AI becomes completely irrelevant, correct?
Well, the issue is we're still going to rely on it for lots of things.
So it would have to come down to the human mind again. And, let's say it's a ship, you would have to know everything that you've done in the history, so that it doesn't detect what tactic you're going to use, and do something that's never been seen before in order to confuse the adversary's AI, correct? You have to make a drastic change, one that you don't know will actually work, so the AI doesn't detect: oh shit, we've seen this before, this is what it's about to do.
Yeah. So to your point, strategic surprise becomes the name of the game very quickly. How do you create an operation such that you maximize the amount of strategic surprise against an adversarial AI? That's one. And then honestly, the second thing that's really critical is that a lot of this will just straight up boil down to how many copies you have running, how large your data centers are, and how much industrial capacity you have to run these AIs, both centrally and at the edge, in all the theaters, in every environment.
How fast will it learn new technology? Let's take, for example, Saronic. They're making autonomous surface warfare vessels. Or Palmer Luckey, you know, doing the autonomous submarines. What am I trying to say here? Let's say we're at war with China. China has all the data, all the history, back from whatever, World War II, on the different capabilities that we have. What happens when something new is introduced onto the battle space, like Saronic's autonomous vessels, or Epirus, or Palmer's rockets or his submarines? How would the AI get the data set to come up with what you're talking about: courses of action, consequences, what it's about to do, the probability of what's going to happen? How fast will it be able to learn when something new is introduced onto the battle space?
Yeah, this is a great question. In general, the first time it sees something totally new, let's say a USV or UUV or whatever it might be that it's never seen before, it won't be able to predict what's going to happen. Because it won't know how fast it can go. It won't know what munitions it has. It won't know what its range is. It won't know all the key facts. Unless, by the way, they have really good intel and already know all those things because they've hacked us. But let's assume they don't know. So for the first few conflicts, it's not really going to be able to figure out what's happening. And that's a key component of strategic surprise: always having new platforms that won't be simulatable, let's say, by enemy war-gaming tech. So that's definitely part of it.
But at a certain point, it's going to know what the hardware is capable of, and it's going to be able to run the simulations to understand how that changes the calculus. And some of this is dissonant, because obviously, if you look at what happens today in the military, it looks nothing like this. But let's play the tape forward and see what happens in the future. Ultimately, you're going to run large-scale simulations, and it's going to figure out: hey, this new unmanned surface vehicle has this much range, it can go this quickly, it can maneuver in this way, it has this kind of munitions, it has this kind of connectivity, it is vulnerable to these kinds of EW attacks, whatever they may be, it can be jammed in these ways. And those will all just be parameters for the simulation to run.
So I think...
But initially, it would have no recommendations. Initially, you'd have strategic surprise. So OPSEC, when it comes to weapons capabilities, is still just paramount.
And will it always come back to the human mind?
Yeah, I believe so. I believe that... We have this concept that we talk about a lot, which is human sovereignty.
AI systems are going to get way better, but how do we ensure that humans remain sovereign? How do we ensure that humans maintain real control over what matters? Maintain control over our political systems, maintain control over our militaries, maintain control over our economic systems, our major industries, all that kind of stuff.
And I believe it's pretty paramount in the military. Just as a simplistic example, we're not going to give AI the capability to unilaterally fire nuclear weapons. We're never going to do that. And so ultimately, so much of what is going to become really critical is the aggregation of information, simulations, war-gaming, and planning, surfaced to humans to ultimately make the proper decisions.
And by the way, so much of this will start bleeding into diplomacy, into the diplomatic decisions that need to be made. It'll bleed into economic warfare. It'll bleed into...
I mean, this goes all the way into... I could see this going all the way into relationship-building between nations. What are the outcomes if we become allies with Russia? What are the courses of action? What are the consequences? So it leads into everything: politics, allies, adversaries, warfare, economics, all of it.
Yeah, totally. Because if you ultimately boil it down, what is the capability? The capability is sensing and situational awareness. I'm going to be able to go through troves and troves of data, OSINT, other forms of open-source intel, the various intel feeds that I have, and know: what is the current status? What's going on? What is the current situation? It'll be able to aggregate all that data to provide a comprehensive view of what those behaviors are. And it'll give you the ability to predict, to effectively play forward every potential action you could take and what would happen in those scenarios, with some probabilistic view, some probabilities.
And then, yeah, you're going to use that for every major decision. The military and the government should use this for every major decision we make. We should do it for trade policies. We should do it for diplomatic relations. We're looking outwards here, but honestly, we should also do it for internal policies. What are our healthcare policies? All that kind of stuff too.
So this capability of effectively all-domain sensing plus planning is going to be paramount.
And I have so many questions. Do you see a world where AI becomes so powerful throughout the world that it becomes obsolete? And we're right back to where we were, I don't know, 10 years ago, 20 years ago, where it's all human decision-making?
Well...
Will it outdo itself?
A few thoughts here. I think the first stage of what's going to happen is kind of what I'm saying: human in the loop to human on the loop. Right now, humans do a lot of brute-force manpower work in all sorts of different places, in the economy, in warfare, et cetera. That's the first level of major automation that's going to take place.
So then it's about your strategic decision-making and your ability to make high-judgment decisions that consider the long term, short term, medium term, all that kind of stuff.
At a certain point, as the AI continues to improve and improve and improve, it will operate at a pace that is very, very difficult for humans to keep up with. And this will start happening in R&D first, in research and development. AI will be able to start doing lots of scientific research, lots of R&D into new weapon systems, lots of R&D into new military platforms, et cetera, much faster than humans would be able to. And humans will just check over its work and decide. And so it's going to race faster and faster and faster.
So then what happens, I think, is that it'll put dramatically more weight on the few decisions that humans do make. All the way to the extreme: the president, or whomever, making decisions about, do I let my AI collaborate with another country's AI? That'll be a decision of dramatic consequence, much higher consequence than similar decisions today. So, almost to your point, as it accelerates, we'll end up at a place where you're right, it all boils down to human decision-making, but those decisions will carry a thousand times more consequence.
How do you decide who you're going to work with?
I mean, it's an international company.
Yeah.
So we've had...
Who all are you working with?
Well, the first thing is, we're pretty picky about who we work with, ultimately just because we only have so many resources, and building these systems and building these data sets is pretty involved, as we've discussed. So our aim generally is: how do you work with the best in every industry? Kind of as I was mentioning: the number one bank, the number one pharma, the number one telco, the number one military, et cetera.
The only addition I would say we view as important is, as we play the tape forward on everything we were just discussing, it's really important that as much of the world as possible runs on an American AI stack versus a CCP AI stack. That becomes really, really important. And it matters not only for ideology, and, kind of as we were talking about before, propaganda and control and all that kind of stuff, but it also really matters at a pure operational level: we're going to want to have as extended AI capabilities as possible.
So, okay, the way I understand this is: you're working with country X. You give country X the AI model to utilize for whatever they're doing, let's just say warfare. We own it, but they have to tap into a U.S.-based data center. Am I correct here? And so, as long as we control the data center that's feeding that AI model, we essentially own it, and country X just has to trust that Scale AI has their best interest.
Yeah, it's like next level.
And if they change... Let's say country X now forms an alliance with China, and they decide they don't want to be a part of America. Then we just yank the AI, or not the AI, but the data that feeds that AI, or manipulate that data to where it's essentially been hacked. Correct? And that's how we keep ourselves safe.
Yes.
And then, with one addition: the way that at least we think about it today, and I think a lot of people think about it today, is that it's okay for the data center to be located elsewhere, located in the country, as long as it's U.S.-owned and operated, because then we still have control in any sort of scenario that happens.
And the only other thing I would say is, we're much more focused initially on low-stakes uses of AI. Can you use AI to help the education industry in one of these countries? Or can you use it to help the healthcare industry? Or can you use it to aid in permitting processes? I think low-stakes use cases matter a lot more initially.
But I really do think... We have this concept of geopolitical swing states. There are a number of countries in the world right now where whether they side with the U.S. or China over time is going to have immense consequences, certainly for what a potential conflict scenario looks like, but also for what the long-term Cold War scenario looks like, what happens over time as our countries are interacting. So I view AI as one of the key elements of diplomacy and long-term strategic impact in the international war game.
How would AI be implemented into our government? I can't remember exactly what you said, implemented to run our political sphere. What does that look like? Because so much of that is people's values, and what people believe in and stand for. And, I mean, today, for example, the country is probably more polarized than it's ever been. So how do you get an AI model to run government when it is this polarized, and there are so many different ideologies, and part of the country's way over here and the other part's way over here? How would an AI model run that?
Yeah. So we have this concept of, kind of like agentic warfare, agentic government. Just the same thing: can you take these very inefficient processes in government and start replacing them with AI-powered functions, so that you're improving efficiency and improving outcomes?
Give me a specific example.
Yeah, so one super simple one. Right now, I think the average time it takes for a veteran to see a doctor in the VA is something like 22 days. That's way too long. And part of that is because of a host of antiquated processes and workflows; in general, that system's not working. I think we all look at that and say, that's not a functional system. So can you use AI agents to automate some parts of that process, automatically get whatever approvals need to be gotten, get whatever information needs to be gotten, such that those 22 days become a day or two, something like that? That, I think, is a no-brainer, a pure win for government efficiency overall.
Other big ones are permitting processes. If I want to build a new data center somewhere, or even if I just want to remodel my home, the permitting process, depending on where you are, could literally take years. And part of that is that there are so many different approvals that need to happen, so many different workflows. What if, instead, we just codified the rules of the system, and an AI agent just automatically went through that permitting process, so that you could get the permit approved or denied within a day?
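A minimal rules-as-code sketch of that idea; the rules and application fields are invented for illustration, not drawn from any real permitting code.

```python
# Codify the permitting rules as executable checks, so an application
# can be adjudicated in seconds instead of routed between offices.
RULES = [
    ("zoning allows use",    lambda a: a["zone"] in a["allowed_zones"]),
    ("height under limit",   lambda a: a["height_ft"] <= 45),
    ("setback satisfied",    lambda a: a["setback_ft"] >= 10),
    ("fire access provided", lambda a: a["fire_access"]),
]

def adjudicate(application):
    """Return APPROVED, or DENIED with the list of failed rules."""
    failures = [name for name, check in RULES if not check(application)]
    return ("DENIED", failures) if failures else ("APPROVED", [])

app = {"zone": "residential", "allowed_zones": {"residential", "mixed"},
       "height_ft": 32, "setback_ft": 12, "fire_access": True}
print(adjudicate(app))  # ('APPROVED', [])
```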
And just that, times a million. Like, one of the things from DOGE that they found, right, is that the retirements are stored in the Iron Mountain mine, a literal mine: the paper copies of the retirement paperwork for all the federal employees. Can we just take that, which is two generations behind in terms of tech, literally pen and paper, and use AI to go from two generations behind to two generations ahead? Can we just automate as much of those processes as possible?
So I see it all over the place. There's so much low-hanging fruit in terms of making current government services and government processes way more efficient. I haven't met anybody who doesn't think this is the case. So that's just all the level-one stuff, improving how our government operates.
Would it eventually replace politicians?
That's a good question.
I think ultimately... So first off, taking a step back, it's definitely the case that the speed of policymaking, the speed of legislation, and the speed at which the government reacts to new technologies is going to have to increase. I've spent a lot of time in D.C. trying to make sure that, as a country, we get the right kind of AI legislation and the right kind of AI regulation, to ensure that this all goes well for us. It's been years of trying to get that done. We still haven't really figured it out as a country. What is the right AI regulatory framework? That's still undecided.
I mean, how do you even describe this stuff to the dinosaurs that are still sitting in D.C.?
I mean, we've got people stroking out on camera.
We've got people literally dying in office.
I mean, we got people up there that probably can't even figure out how to open a fucking email.
And then you come in, 28 years old, built Scale AI. I mean, just going all the way back to when, you know, Zuckerberg's sitting there,
you know, talking to Congress.
I mean, I don't agree with everything he did and whatever.
It doesn't matter.
But I look at that and I'm like, you guys have been sitting in D.C.
Probably don't even know how to open your own email.
And you're talking to a tech genius who's trying to dumb this down and make you understand.
I mean, I get one day with you, you know what I mean, to try to wrap my head around this.
And they have 50 million other things they're dealing with.
They're not up to speed on tech. I mean, how do you even begin to tap in?
I mean, I think the first thing, and I think a lot of people in the know understand this, is that a lot of the minute decisions really end up being made by staffers, right? And generally speaking, you have to be extremely competent as a staffer, no matter what. It's a very chaotic job, there's a lot going on, and they have to make very fast decisions.
The other thing is, I think analogies are pretty helpful. Everybody alive today has seen the pace of technology progress just increase and increase and increase. I think you'd be hard-pressed to find anyone who doesn't believe that AI will be this world-changing technology. Now, exactly how it will change the world, I think that's where it gets fuzzier, but it will be a world-changing technology.
But the issue is like, I mean, the political system just doesn't respond very quickly, right?
And that's going to be very harmful.
I mean, we need to be able to respond very quickly to these new technologies.
And I think it'll become more and more obvious. As AI and other technologies accelerate, it'll be very obvious that the world is just changing so quickly. And frankly, I think voters are going to demand faster action. And so I think our government is set up to accelerate, but that's when it'll happen.
How do we power all this?
I mean, that's a big discussion, you know, and everybody seems so apprehensive to go nuclear.
The grid is extremely outdated.
I mean, we just saw the lights flicker here about, I don't know, 30 minutes ago.
Power outages happening all the time.
There was just a big one, all of Spain.
Portugal, Italy.
I mean, it's happening all the time in the U.S. power outages.
How are we going to be able to power all this stuff?
What would you like to see happen?
Yeah, I mean, first of all, if you take a graph of China's total power capacity over the past 20 years versus U.S. total power capacity over the past 20 years, the China graph is straight up and to the right.
They're just adding crazy amounts of power.
They've doubled it in the last decade, I think.
They've doubled it?
Doubled their power capacity in the last decade.
and the United States is basically flat.
It's growing like a little bit.
And so that's what's happening right now.
Right now China's doubling every decade or so.
U.S. is basically flat.
And we're looking at, you know, to just power the data centers that today AI companies know
they want to build, we're going to need something like a doubling of our energy capacity.
And that needs to happen very, very quickly, like almost, you know, that has to happen almost
immediately.
And so you have to believe that our graph is going to go from totally flat to vertical,
faster vertical than China's energy growth.
And China in the meantime is growing perfectly quickly; they'll accelerate, they'll add more power to their grid.
Like, I think it's very hard to imagine realistic scenarios where without drastic action,
the United States is able to grow its energy capacity faster than China.
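As a back-of-the-envelope check on those growth claims (the rates below are illustrative assumptions, not figures from the conversation), doubling in a decade corresponds to roughly 7% compound annual growth:

```python
# Back-of-the-envelope: what compound annual growth rate doubles capacity
# in a decade, and how long does doubling take at a near-flat growth rate?
# The 1%/yr figure is an illustrative assumption, not a quoted statistic.
import math

def rate_to_double_in(years: float) -> float:
    """Compound annual growth rate needed to double in `years` years."""
    return 2 ** (1 / years) - 1

def years_to_double(annual_rate: float) -> float:
    """Doubling time at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

print(f"Doubling in 10 years needs ~{rate_to_double_in(10):.1%}/yr")      # ~7.2%/yr
print(f"At 1%/yr growth, doubling takes ~{years_to_double(0.01):.0f} yr") # ~70 yr
```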
Now, where are we on the – so if China's going straight up and we're flatlined,
I mean, does that mean, are you saying that China has surpassed our power capabilities,
or are we still above them even though they're on the rise?
They're definitely above us, because they have a bigger population and they have way more industrial capacity. They definitely have more power in total than us.
More power generation capabilities.
And by the way,
it's actually not rocket science why that is.
If you then break that down to sources of that power
in China,
it's because coal is like 80% of that.
Yeah, they're all coal.
Yeah.
It's just tons of coal.
And then
if you look in the U.S., renewables have actually grown a lot, but the reason the overall number is flat is because we're using renewables to replace coal, natural gas, fossil fuels. And so when you net it out, in the U.S. we're flat, and in China it's straight up.
So that's the first thing.
Like, we need drastic action.
You know, the administration has the National Energy Dominance Council.
We've sat down with them a few times. We have to take drastic action to enable us to at least start matching their speed of adding energy to the grid, and ideally surpass it. That's the first thing. The second thing, like you're
talking about, is our grid is extremely antiquated. And that's a major strategic risk. You know,
I don't know what the cause or the source of the outage across Spain was. But, you know,
some people think it was a foreign actor or some kind of cyber attack.
I guarantee you the US energy grid is extremely susceptible to large-scale cyber attacks.
And the sophistication of these cyber attacks sometimes is, like, so stupid. It's like, if you find the right power plant login terminal to go into, sometimes people don't change the username and password from the default, which is "username" and "password." And so you can just find some power station in, like, Wyoming where the username and password are still the defaults, log in, and shut down the power in the entire region.
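Purely as a defensive illustration, this is roughly what a utility auditing its own device inventory for factory-default credentials might do. The inventory, device names, and credential pairs are all hypothetical:

```python
# Defensive sketch: flag devices in your own inventory that still use
# factory-default credentials. Hypothetical data; audit only systems you operate.

DEFAULT_CREDS = {("admin", "admin"), ("username", "password"), ("root", "root")}

# Each entry: (device_name, configured_username, configured_password)
inventory = [
    ("substation-wy-01", "username", "password"),
    ("substation-tx-07", "ops_team", "Xk!92fLq"),
]

def find_default_credentials(devices):
    """Return names of devices still using a known default username/password pair."""
    return [name for name, user, pw in devices if (user, pw) in DEFAULT_CREDS]

for device in find_default_credentials(inventory):
    print(f"WARNING: {device} still uses factory-default credentials")
```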
So our grid, just because of how antiquated it is, how decentralized it is, all of that is hyper, hyper susceptible to cyber attacks, hyper susceptible to foreign activity.
And that matters now. Like, right now, if you take out the energy grid in a major city, people will die. So it's bad now.
But then let's go back to what we were just talking about with AI. Let's say we have large-scale AI-on-AI warfare with China. They just take out the power grid, take out our data centers and the power fueling those data centers, and then we're sitting ducks.
I mean, not only that, but it's my understanding that China actually produces and manufactures a lot of the major components that go into our grid, like the transformers. And to my understanding, we don't even check those for malware, Trojan horses, shit like that. In fact, DOE actually did an inspection on one and never even released the results of what they found, which probably means they found some shit.
And I mean, I just, I don't know how we combat that.
I mean, where has this happened elsewhere?
Like, look at Salt Typhoon.
Like, this was a recent hack that was declassified, which is that Chinese malware and
cyber activity, like, basically had fully infiltrated our major telecom providers.
I think AT&T was, like, entirely compromised by this hack called Salt Typhoon from the CCP. And they did that so they could read all the messages, all the SMS, all the audio they were able to capture, as part of an intel-gathering operation.
But if they're able to hack into our telcos, they're sure as hell capable of hacking into our energy grid, clearly capable of hacking into any of our other critical infrastructure.
And it just goes back to what we're talking about,
like, the energy grid, A, if we can't produce enough power,
we're hosed, and B, if the adversaries can take out our power
at will, we're hosed.
And so we have this major, major vulnerability as a country
on just like the cyber posture of our energy grid.
I think it's one of the biggest, most obvious, flat-out clear vulnerabilities of our entire country.
A, you create civil unrest. Imagine you took Houston's power grid out. People would die, and you'd cause all sorts of chaos. But then you take out these data centers, you take out military bases, you take out radar systems, you take out, you name it, you can take out almost any piece of homeland infrastructure, and that creates huge strategic openings for adversaries.
I mean, you have to run in these circles. I mean, you're building massive data centers, correct? And so when you go to D.C. and you're advocating, hey, we need more power, what's the organization you met with?
The National Energy Dominance Council.
What do they say?
They totally agree.
I mean, they know we have to build more power.
And then you get to the next layer of detail. It's like, okay, how do we accelerate nuclear? How do we accelerate the permitting process? What are existing power generation capabilities that we turned off that we can turn back on? You go through all the natural things to do. I mean, I think we know what to do. The question is whether we can get out of our own way.
And then our grid is so antiquated that that vulnerability kind of means we can be taken out at any time. I mean, I may have made an assumption. Are you building data centers?
We ourselves are not building data centers. We partner with companies
that are building, you know, the largest data centers in the world. Okay. And so I've also heard
rumors that these major data centers are starting to just create their own power source. Is there any validity to that?
Yeah, so a lot of designs these days involve, can you just create an SMR, a small modular nuclear reactor, per data center? Can you basically have a nuclear reactor co-located with the data center to power that data center's capacity?
Which I think is a good idea.
The issue is like, I mean, China is going to be way ahead of us on that.
The largest nuclear power plant in the world is in China.
So, obviously, we need to lean into nuclear. That needs to happen. Obviously, we need to lean into all power generation sources, kind of an all-of-the-above approach to power generation.
But even that doesn't get us to a posture where you're confidently exceeding China.
You're just kind of catching up to where they are.
And so, I mean, this is a huge, a huge issue.
Yeah.
Let's take a quick break.
When we come back, I want to dive more into China's capabilities and our capabilities.
All right, Alex, we're back from the break.
We're getting ready to discuss some of our capabilities versus China's capabilities.
And, you know, we just got done kind of talking about power.
Is China leading the U.S. in any other realms when it comes to the AI race?
I mean, Xi Jinping has said himself, you know, the winner of the AI race will achieve global domination.
Yeah, well, the first thing to understand, as you're mentioning, is China has been operating against an AI master plan since 2018. The CCP put out a broad, whole-of-government, civil-military fusion plan to win on AI. And like you're mentioning, Xi Jinping himself has spoken about how AI is going to define the future winners of this global competition.
From a military standpoint, they say explicitly, hey, we believe that AI is a leapfrog technology, which means even though our military is worse than America's military today, if we overinvest in AI and have a more AI-enabled military than theirs, we can leapfrog them. So they've been super invested.
Right now, I think the best way to paint the current situation is: they are way ahead on power and power generation. They're behind on chips, but catching up on chips. And they are ahead of us on data. China's had, again, since 2018, a large-scale operation to dominate on data. In 2023, I think, there were over 2 million people in China working inside data factories, basically as data labelers or annotators, creating data to feed into AI systems.
I think that number in the U.S. by comparison is something like 100,000.
So they're outspending us 12 to 1 on data.
They have over seven full cities in China that are dedicated data hubs, basically powering this broad approach to data dominance.
And then on algorithms, I think they are on par with us because of large-scale espionage.
And this is, I think, one of these open secrets in the tech industry: Chinese intelligence basically steals all of the IP and technological secrets from the United States.
There are a bunch of very concerning reports here.
So one is there was a Google engineer who took the designs and all the IP of how Google designed their AI chips
and just took those and moved to China and then started a company on top,
using those designs.
The guy, by the way, was Leon Ding, I think. And the way he stole the data out of Google's corporate cloud was so stupid.
He just took all the code.
He copy-pasted it into Apple Notes,
into like the Notes app,
and then exported to a PDF and printed it,
and just walked out with it.
That's it.
That's it.
So this was later discovered, we found out this happened, but for months we had no idea that they'd stolen all this critical IP.
Stanford University, this just came out last week, is entirely infiltrated by CCP operatives. A few crazy facts.
So first, by law in China, any Chinese citizen must comply with CCP intelligence-gathering operations. So if you're a Chinese citizen living in the United States and the intelligence agencies try to reach out to you, you have to comply with them, and you have to give them what you're seeing, what you're finding, et cetera.
And there's tons of Chinese nationals, Chinese citizens, across all the major elite universities, across all the major tech companies, across all the major AI labs. Like, they're everywhere.
The second thing that's crazy is that about a sixth of Chinese students in America, Chinese citizens studying here, are on scholarships sponsored by the CCP itself.
And for those on these scholarships, they have to report back to a handler, basically.
What are the things they find?
What are the things they're learning?
Otherwise, their scholarships get revoked.
So there's an incredibly large-scale intelligence operation running against the U.S. tech industry, which is just collecting all the information and secrets, technological secrets, from our greatest research institutions, our universities, our AI labs, our tech companies, at massive scale.
And honestly, I think this is a very underrated element of how China caught up so quickly.
So, you know, Deepseek came out of nowhere.
Everyone was so surprised at how capable their model was and how they learned all these tricks.
You know, how much of that is because they came up with all of them on their own?
Or they managed to have a, like, exquisite high-end espionage operation to steal all of our trade secrets from the United States and then re-implement them back in China.
What does our espionage look like?
Nowhere close to as good, I think. I mean, one thing that the CCP did for DeepSeek, the DeepSeek lab, is after DeepSeek blew up and the CEO of DeepSeek met with the Chinese premier, they then, I shouldn't say locked up, but they huddled all the researchers together and took all their passports. So none of the AI researchers who work at DeepSeek are able to leave the country at all.
And they don't come into contact with any foreigners. So they've basically locked down the entire research effort, and that makes it very, very hard to conduct any sort of espionage into that operation.
And then there's that report, this is all in the news, but a decade ago, 15 years ago, many of the US CIA operatives in China were killed because one of the communication channels they were using was compromised by Chinese intelligence, and the CCP was able to effectively round a lot of them up and kill them.
So their espionage into us is extremely deep, a huge risk; we're deeply, deeply penetrated by Chinese intel. And comparatively, as far as I know, we have much less capability. And I think they've designed it such that it's very hard to infiltrate their AI efforts.
Jeez.
So they're, you know, ahead of us on data. They're able to catch up through espionage on algorithms pretty easily. They're ahead of us on power.
So what are we ahead on?
Well, right now we're ahead on chips. And that's kind of our saving grace: the Nvidia chips and the entire stack there are the pride of the world, and we're the most advanced on these chips. Chinese chips are also catching up, though. There's a bunch of recent reports that Huawei chips are getting to be basically one generation behind the Nvidia chips.
So they're close.
They're close.
So all of this is pretty concerning.
There was another report that came out of CSIS recently that there's a Chinese effort, it's like the next-generation brain understanding project or something, where they're basically trying to use AI to fully understand human personality and human psychological behavior. I imagine that's ultimately for, effectively, information warfare.
As we were talking about at breakfast, China has large-scale information operations, large-scale information warfare, and has been doing that for literally decades, going back all the way to in-person operations in Hong Kong. They're so sophisticated at all that, and AI is going to enable them to move much faster as well.
How do we combat that?
Well, I mean, I think we need our own information operations efforts.
I think that's pretty critical.
That's specifically on that thread.
And I think we need to acknowledge that, at the end of the day, we are a more innovative country, but we have to dramatically, you know, get our shit together if we want to win long term in AI.
We need to onshore chip manufacturing.
We need to be manufacturing huge numbers of chips.
We can't be dependent on Taiwan to manufacture our high-end chips.
Are we doing that yet at any capacity?
Extremely small capacity.
There are a few fabs in Arizona that can produce some chips,
but the vast majority of the volume still comes out of Taiwan.
We need to tighten up security in our AI companies dramatically. We need to have proper counterintel on what the espionage risk is within these companies.
You know, solve the power problem that we talked about.
We need to be investing against the cyber threats, investing into large-scale cyber defense. We need to invest into data. We need our own programs around data dominance to ensure that China doesn't just run away with higher-quality and greater AI data sets on us.
So you can go through each of the elements and build the proper plan for the United States to win.
But...
Have you started any of that?
I mean, I think some things are underway, but...
I mean, not enough.
Nowhere close to enough.
To be sure that the U.S. will win, definitely not.
And they also have a fundamental advantage.
You know, one of the things that people say a lot now is, oh, what we need in the United States is an AI Manhattan Project, where we collect all the brilliant minds together, we collect our resources, and we have one large effort in the United States. Well, it turns out it's actually really hard to pull that off in the United States, but China can pull it off super easily. China can just say, hey, all the best AI people, you now work in one company, and we're going to pool together all of your resources. We're going to put you right next to the largest nuclear power plant in the world. We're going to build the largest data center in the world here. All the chips that China has are going to go towards building this large-scale AI project. They just have the ability to collect all of their resources together and throw it at winning the AI race.
In the United States, we have all these companies. And the United States government, as of yet, is not going to force all these companies to combine and merge; today that would be seen as such an overreach of government power. But because of that, we're going to have, like, five fragmented AI efforts. And maybe in aggregate we'll have way more chips, in aggregate we'll have more power, in aggregate we'll have more great researchers, but we're not going to be able to focus those efforts, whereas China is easily going to be able to focus all their efforts.
Wow.
You had mentioned something downstairs about nuclear weapons, I believe.
Yeah. So this is where stuff gets really weird for national security, which is you could clearly imagine scenarios where very advanced cyber AI invalidates nuclear deterrence.
What do I mean by this? Right now, nobody fires nukes because we have MAD, mutually assured destruction: if I do a first strike against another country, they're going to be able to, while that nuke is in the air, do a second strike, and there'll be destruction on both sides. It'll be really bad. So because of this second-strike capability, luckily, we have real deterrence.
Well, what if instead, let's say I'm the United States, and I have the most advanced AI cyber-hacking capabilities in the world? I can build AI agents that can hack into any other country, turn off their energy grid, disable their weapon systems, disable everything. So what do I do instead? First, I send in my cyber AI force, effectively, to disable all the weapon systems of the enemy country. And because I have so much AI capacity, I can disable all of your weapon systems. And then I send my first strike, and you don't have a second-strike capability.
So if that happens, basically, you cannot deter AI plus nuclear with just nuclear. And that's what will force this proliferation of AI capabilities: even small countries are going to need to invest in lots of AI capabilities, because their nuclear weapons are no longer a sufficient deterrent.
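One way to see why the logic flips is a toy expected-value calculation. All the payoffs and probabilities here are invented purely for illustration:

```python
# Toy model of why AI cyber capability can undermine nuclear deterrence.
# All payoffs and probabilities are invented for illustration only.

def first_strike_value(gain: float, retaliation_cost: float,
                       p_second_strike_survives: float) -> float:
    """Expected value of striking first, given the probability that the
    defender's second-strike capability survives and retaliates."""
    return gain - p_second_strike_survives * retaliation_cost

GAIN, RETALIATION = 100.0, 1000.0

# Classic MAD: the second strike almost certainly survives, so striking
# first is ruinous and deterrence holds.
print(first_strike_value(GAIN, RETALIATION, 0.95))  # -850.0

# AI cyber first disables most weapon systems: the second strike rarely
# survives, and the first strike suddenly looks "rational."
print(first_strike_value(GAIN, RETALIATION, 0.05))  # 50.0
```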
Jeez.
What about bio-weapons?
Yeah.
This is the element that is really underrated right now. So, COVID leaked out of a virology lab in Wuhan and basically shut the world down for two years. And that's, like, the level-one bio-risk kind of stuff. This was a relatively innocuous pathogen, let's say, but it still killed probably at least 10 million people globally and still shut the whole world down for two years.
Well, recent models, the new AI models, are able to outperform 95% of MIT virologists. The newest models from OpenAI and Google are smarter than literally 95% of virologists at MIT, based on a recent study by the Center for AI Safety.
So now, whether it's right now or in a few years, it will be feasible to use AI-based capabilities to help you design powerful pathogens.
And what's more, you're going to be able to design in certain characteristics of these pathogens: you'll be able to tune the virality, tune the lethality. Also, due to recent advances in synthetic biology, you can now create viruses that specifically target certain segments of DNA. So I could create a bioweapon that just targeted any individual with a certain segment of DNA, which means I could target basically any population, any group, any sub-segment of the population in the world, which is really, really bad.
So first, even without AI, synthetic biology is making so much progress that there are all sorts of inherent risks of bio-weaponry, or leaks of pathogens and viruses and whatnot. And then with AI, not literally today's models, but a few generations down, you're going to be able to use these AI systems to design or build next-generation pathogens.
I mean, for good reason, there are international treaties such that we don't engage in biological warfare. But if you imagine these scenarios where nuclear deterrence doesn't work for countries, and they don't have the resources to have large-scale AI data centers, I'm worried that countries will turn to biological weaponry, bioweapons, as their deterrence mechanism, which is highly destabilizing for the world.
Wow.
That's some scary shit.
The flip side is there's new technology that can also prevent this stuff. There's this research coming out of a lab in Seattle, David Baker's lab, this guy who just won a Nobel Prize, on biological noses, or digital noses, sorry, which is basically these devices that can automatically detect proteins or chemicals or pathogens in the air.
And so I think what the real offense-defense of bio and bio-weaponry will end up looking like is, we're just going to have large-scale deployment of digital noses, effectively, in every space, on every shipping container, on every plane. They're just constantly sensing for all existing known pathogens and any new pathogens that might exist, and constantly detecting and ultimately containing the spread.
It's sniffing real time for all of that shit.
Yeah, exactly.
I mean, also on the flip side, I guess if AI is developing a new bioweapon, COVID comes out again, COVID 2 we'll just call it, then our AI should also be able to figure out the vaccine, the antidote to it, correct?
Yeah, totally. So there will be an offense-defense element to this. Just as we were walking through, AI applied to command and control has an offense-defense element, applied to cyber there's an offense-defense element, and applied to bio and bio-weaponry there will be an offense-defense element. So for all of these, thankfully, the hope is that the world agrees that basically we're not going to go down any of these paths, because there's mutual deterrence and it's not worth it for anybody in the world to destabilize and risk humanity like that. That's basically where we need to land.
Wow. How concerned are you about China and Taiwan? I mean, we were talking about this a little bit at breakfast, and I can't believe they have not made a move yet. I mean, I thought for sure it would happen towards the end of the last administration, but with their chip production capability, I mean, how concerned are you about China taking Taiwan?
I think if it's going to happen, it's going to happen this decade.
And it's probably going to happen this administration.
And why do you say that?
I mean, China, at a macro sense, has huge demographic issues. That's just, like, the force of gravity in their country. They have this huge aging population. They made the wrong bet many decades ago with the one-child policy, and so they're going to have this huge aging population, and that plays out quite soon. Over the next decade, they're going to look more and more like Japan in that way, where this large aging population paralyzes a lot of their ability to make any sort of aggressive moves, particularly when it comes to military and industrial capacity, et cetera. So that's one force of gravity they have to contend with, and I think it means they're going to want to move sooner rather than later.
And then they've had such an insane military buildup over the course of the past few decades. And we're currently in a situation where China has far more industrial capacity, far more manufacturing capacity, than we do in the United States. So that's a window for them.
So do you think they're pressed to do it because of the aging population?
I think a lot of factors. I think Xi is aging, right?
This will be an important component of his legacy as he would view it, I think.
They have the aging population, which will minimize their political latitude over time, naturally.
And then they have, I mean, they're in this insane window where they have just incredible
industrial manufacturing capabilities compared to anywhere else in the world.
You know, in 2023, China deployed more industrial robots than the rest of the world combined. I mean, we were talking a little bit about automated factories and automated industrials; they're racing at that faster than any other country in the world. So you can look at all these dimensions and this window, and if they're going to do it, they're going to do it soon.
Yeah.
Yeah.
I mean, what percentage of the chips that we use come from Taiwan?
I mean, 95% of the high-end chips are manufactured in Taiwan.
And so what happens if China takes Taiwan?
So, yeah, we were gaming this out, we talked a little bit about this. Let's say China blockades or invades Taiwan. Then there's a question. These fabs are incredibly, incredibly valuable, because as we were just describing, if you believe in the pace of AI progress and AI technology, then everything boils down to how much power you've got and how many chips you've got. And if they own 95% of the world's chip manufacturing capability, I mean, they're going to run away with it. So then you look at that and you say, will the Taiwanese people bomb the TSMC chip fabs, and/or will the U.S. bomb them, and/or will some other country bomb them?
My personal belief: I don't think the Taiwanese do it, because even if they get blockaded or invaded by China, those fabs are still a huge component of Taiwan's survivability and Taiwan's relevance as an entity. So I don't think they do it.
China definitely doesn't do it because they are invading partially to get, you know,
to gain those capabilities.
And so then does the U.S. bomb them?
If the U.S. bombs them, that's probably World War III. It's hard to imagine that not resulting in massive escalation. So you're looking at it, and there's kind of no good options. I mean, everyone's very focused on it, obviously, but it is a real powder keg of a region.
How do you think this all ends? We had a little discussion about this at breakfast.
Yeah, yeah.
I mean, let's assume that in the next handful of years, the next three or four years, there's an invasion or blockade of Taiwan. I think, given how important AI is, it's hard for the U.S. to not take any sort of action in that scenario. And then, with almost all the actions you could take, you would see escalation into a major, major conflict.
So the best case scenario is we deter the invasion or blockade altogether.
And it certainly is in everyone's interest to not get into a large-scale world war that's hugely destructive and kills lots of people. So I think, fundamentally,
we should be able to deter that conflict. But that's why all this matters so much. We need to
make sure our AI capabilities as a country are the best in the world. We need to make sure that
our military AI capabilities are the best in the world. We need to make sure that, you know,
there's clear economic deterrence of this kind of scenario. We need to be investing in every way to deter this conflict. Where this really breaks down is if the CCP's calculus diverges from our own.
If their calculus becomes, oh, no, this is going to work out.
You know, we can take this and then, you know, we're strong enough such that it'll work out
for us.
And then our calculus is the opposite.
That's where, that's where the World War scenario happens.
So I think it's possible to deter.
And I think we have to, you know, there's a lot of things we have to do to make sure that we deter that conflict.
And that should be, I mean, certainly, I think it already is like 80% of the focus of the entire DOD.
I mean, we can deter, but when you're talking about an aging population, they're getting desperate. And it sounds like in order for them to legitimately win, they have to acquire those chip fabs, correct? And they already have 250 times the shipbuilding capacity, they have way more people, they have more power than we do. I mean, military recruitment in the U.S. was at an all-time low; I don't know what it is today. But, I mean, even if it... So I guess what I'm saying is, we can only deter a desperate entity for so long before they throw a Hail Mary play, right? Would you agree with that?
Yeah, and then it just depends on their level.
We would have to dedicate an entire military to surround Taiwan to effectively do that, in my opinion.
Yeah, I mean, I think if the CCP and the PLA assess that they will focus their entire military capacity on seizing Taiwan, then that becomes a really tricky calculus.
I mean, why wouldn't they? If he believes that the winner of the AI race achieves global domination, and he's getting older, you just talked about how important his legacy is to him, which I'm sure you're right about. I don't know how you deter that. And then they win the AI race.
Yeah, the only thing that we can do, and I think this is a long shot, but I think it's important, is if ultimately we actually end up collaborating on AI. And I know that sounds kind of crazy, but it works if we're able as a country to demonstrate that we're just so far ahead.
And one key element of how the whole AI thing plays out is this idea of AI self-improvement, or intelligence recursion, as people sometimes call it. Basically, once AIs get sufficiently good, you can start utilizing the AIs to help you build the next AI. As sci-fi as that sounds, you utilize your current-generation AI to build the next-generation AI faster, and faster and faster and faster. And so at some point, there's some form of exponential takeoff: your AI capabilities get good really, really quickly. And if somebody's even three to six months behind you, then they're never going to catch up to you, because you're running this self-improvement loop faster than anybody else.
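A small simulation of that compounding intuition, with made-up numbers: if progress is proportional to current capability, a fixed six-month head start never shrinks, and the leader's relative edge stays locked in while the absolute gap explodes:

```python
# Toy model of intelligence recursion: growth rate is proportional to the
# current capability level, i.e. C(t) = e^(accel * t). Numbers are invented.
import math

def capability(months: float, accel: float = 0.2) -> float:
    """Capability after `months` of self-improving (exponential) growth."""
    return math.exp(accel * months)

LEAD = 6  # leader's head start, in months

for t in (0, 12, 24, 36):
    leader, chaser = capability(t + LEAD), capability(t)
    print(f"t={t:2d} mo: ratio={leader / chaser:.2f}, gap={leader - chaser:,.1f}")
# The ratio stays constant (~3.32x for these numbers) while the absolute
# gap keeps widening, so the chaser never closes the six-month lead.
```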
And so this is a key idea. I think it's a little bit theoretical right now; it's not clear whether or not this intelligence recursion is how it plays out, but a lot of people in AI believe it. And I probably believe it too, that we will be able to use AIs to help us continue training the next AIs and improve things more quickly.
And if you believe that, then if we're, let's say, three to six months ahead of China,
and we maintain that advantage and we take off faster,
then they're going to be way behind.
And then ultimately, we're going to be in a great position to say, hey, actually, we're way ahead, you guys should quit your efforts. We'll give you AI for all of your economic and humanitarian uses throughout your society, and we agree we're not going to battle on military AI.
What would it take to take the chip building capabilities that Taiwan has and implement that here in the U.S. to protect it?
So, yeah. The first thing is, there's been hundreds of billions of dollars invested just into the buildout of those fabs, the foundries, they're called: the buildup of these large-scale chip factories and all the high-end equipment and tooling inside of them. Hundreds of billions of dollars of investment. So first off, there needs to be hundreds of billions of dollars of investment in the U.S. That's not the hard part. The second part, and this is really the hard part, is that it's basically a large-scale factory operated by highly, highly skilled workers who are very experienced in those processes, and the whole thing operates like clockwork. Unless you can get those people to the U.S., you're going to have to rebuild all that know-how and all that technical capability, and that's what takes a really long time.
So why do you think we haven't done that? Why do you think we have not incentivized these brilliant minds to come here and do it for us?
So TSMC, Taiwan Semiconductor, the company that builds these fabs, they have stood up a few fabs in Arizona. But they cited issues. First, there were issues around permitting and getting enough power, and they dealt with some EPA issues. And then they just have issues where the technicians working in Arizona aren't as skilled or don't work as hard as those working in Taiwan. So they've built a few fabs in the United States, they've tried to do it, but our red tape and our power are not what they need to be to be able to do this. Red tape,
power, workforce. And then there's another key thing, which is, if you look at it from Taiwan Semiconductor's, from TSMC's perspective, they're not all that incentivized to stand up all these capabilities in the United States. Because as soon as they start standing up all these capabilities in the United States, the United States is not incentivized to defend Taiwan.
Yeah.
And it's a Taiwanese company, and it's a critical part of their survival strategy. So that's really where the rubber hits the road: are they actually incentivized to do a large-scale build-out of chip manufacturing capacity in the United States? I think the answer is, like, no.
Makes sense.
I mean, there would have to be some type of a deal struck where they fall under our wing.
Yeah, I mean, you can imagine some kind of deal between the US and China. It'd have to be a diplomatic deal at the highest levels, something along the lines of, hey, you guys can have Taiwan, but we need large-scale chip manufacturing in the United States, or something like that. And maybe there are worlds where that kind of deal could get drawn up, I don't know. But that would also mean that the United States would just have to say, hey, all we care about at this point is chip manufacturing, and we don't actually care about the Taiwanese people and the country and all that stuff.
Man.
Man.
And are they working with China at all?
The TSMC?
Yeah.
So, I think they're technically not supposed to, but Huawei, one of the leading companies in China, has been able to get tons of dies, they're called, basically tons of chips or chip precursors, from Taiwan. And they usually do it by starting some cutout company that doesn't seem associated with them in, like, Singapore or Malaysia, and then the Singaporean or Malaysian companies buy a bunch of chips from TSMC and mail them back, or something. But there's clearly been a lot of TSMC high-end output that has gone to the Chinese companies.
Wow. Scary shit, man.
I mean, I think this is where, if you look at the situation and all of the dynamics at play right now, it's like a powder keg. It's very, very volatile, highly problematic in many ways.
And this is where, I mean, you just ultimately have to believe that there's got to be some effort towards diplomatic solutions.
Yeah.
Because it is definitely true.
Like, war will be really bad for both sides.
Yeah.
Yeah.
How do we coordinate with China on AI?
Yeah.
Yeah.
So, what does that look like?
So, yeah, right now, U.S. and China, we're definitely in an all-out race dynamic. And we're going to race, and I think this is correct, to build the best AI systems. They're going to race to build the best AI systems. We're both all in on racing towards building the most advanced AI capabilities, the largest data centers, the largest capacity, et cetera.
And this is, if you recall, kind of how nuclear went. In nuclear weapons, as well as the application of nuclear to power production, it was kind of all systems go, everyone racing towards building capacity, building capability. And then Chernobyl and Three Mile Island happened, and it created large-scale consternation around the technology and the risks of those technologies. There were a bunch of international treaties, and there was a large international response towards coordinating on nuclear technology. Now, all that said, if you look at nuclear, that set our country back, set many countries back, many generations in terms of power generation. But what it took was effectively these small-scale disasters, which were the forcing function for international cooperation.
You can imagine a scenario with AI where, because of all the things that we've been talking about, maybe some terrorist group or some non-state actor, or North Korea or whomever, decides to use it in a particularly adversarial or inhumane way, and that disaster has some large-scale fallout. You take out power in one of the largest cities in the world and tons of people die, or some pathogen gets released and tens of millions of people die. One of these things happens that causes the international community and everyone in the world to realize: oh shoot, we have to be coordinating on this. We should be collaborating for AI to improve our societies and improve our economies and improve the lives of our people, but we need to coordinate on its use towards, for lack of a better term, scary things, like bio or cyber warfare, and the list goes on. So long story short, I think the path really is some kind of, we sometimes talk about it as an AI oil spill, some kind of incident that really causes the international community to realize, hey, we have to start coordinating on this.
I mean, you say China's gone all in on the AI race, and the U.S. is going all out on the race, but we're kneecapping ourselves. I mean, you just mentioned the red tape, the EPA, permitting, and the power.
And we're not producing more power; we're flatlined. We've established that. And as far as I know, we're not getting rid of the red tape, you know, to jet-launch this. It just seems like we're cutting ourselves off at the knees here.
Right. Right now, we have a lot of work to do, for sure. We have to build strategies to have energy dominance, to have data dominance. On the algorithms, I think we'll be okay; they're going to do espionage, but I think we'll be okay on algorithms. We need to ensure we have chip dominance long term, and we need to make sure all this lends itself to military dominance. I totally agree with you. We need to ensure, today, that we have the proper strategies in place so that we stay ahead in all these areas.
The worst-case scenario for the United States is the following: the CCP does a large-scale Manhattan-style project inside their country, and because of all the factors that we've talked about, they realize they can start overtaking the US on AI. That lends itself to extreme military advantage, and they use that to take over the world. That's the worst-case scenario for the US. If US and China AI capabilities are even just roughly on par, I think you have deterrence; I don't think either country will take the risk. If the US is way ahead of China, I think you maintain U.S. leadership, and that's a pretty safe world.
So the worst case scenario is they get ahead of us.
Are there any other players other than the U.S. and China involved in this?
Who else do we need to be watching out for?
So, yeah, right now, definitely U.S. and China. A lot of other countries will matter, but not all of them have enough ingredients to really properly be AI superpowers. But other countries have key ingredients. To name a few: everything we've talked about with cyber warfare and information warfare, information operations, Russia has very advanced operations in those areas, and that could end up mattering a lot if they ally with the CCP. There's a lot of ways they could team up, and that could be pretty bad. The countries in the Middle East will be very important because they have incredible amounts of capital and lots of energy.
And so they're critical players in how all this plays out. India matters a lot. India has a lot of high-end technical talent; I don't know, right now, between India and China, which has more high-end technical talent, but there's a lot in India for sure. Massive population, also starting to industrialize in a real way, and right next to China. So India will matter a lot.
And then there's a lot of technical talent in Europe as well. I think it's unclear exactly how this plays out with the European capabilities. It seems like there are some efforts now for Europe to try to build up large-scale power, build up large data centers, make a play. It's yet to be seen how effective those efforts are going to be, but you can clearly see some scenarios where, if they make a hard turn and go all in, they could be relevant as well.
Is there a world where AI takes on a mind of its own?
So, obviously, you can hypothetically paint the scenario where you have superintelligence, or really powerful AI, and at some point it realizes that humans are kind of annoying and takes us all out. But I think that's so preventable as an outcome. Because, first of all, all the things we just talked about are the very real things that happen long before you have this hyper-advanced AI that takes everyone out. So we have lots of things we have to get right before then. And second, for AI to actually be capable of having a mind of its own and taking all humans out, we'd have to give it just incredible amounts of control. It would have to basically be running everything, with us just sort of along for the ride.
And that's a choice. We have this choice of whether or not to give all of our control to AI systems. And as I was talking about before, with human sovereignty, my belief is we should not cede control of our most critical systems. We should design all the systems such that human decision-making, human control, is really, really important.
Human oversight is really important.
This is actually one of the things that we're working on as a company. Honestly, as I think about long-term missions, one of the most important things is creating human sovereignty. So first is, how do we make sure all the data that goes into these AI models increases human sovereignty, such that the models are going to do what we tell them and are aligned with humans and aligned with our objectives. And two is that we create oversight, so that as AI starts doing more and more actions, doing more planning, carrying out more things in the world, in the economy, in the military, et cetera, humans are watching and supervising every one of those actions.
So that's how we maintain control.
And that's how we prevent, you know, the Terminator scenarios or the, you know, AI takes us out kind of scenarios.
Interesting.
Well, Alex, wrapping up the interview here, but man, what a fascinating discussion.
Thank you.
Thank you for being here.
One last question.
If you had three guests you'd like to see on the show, who would they be?
Oh, that's a good question.
Who would I like to see?
Well, I really like what you've been doing recently, which is getting more tech folks on the pod.
So I'd go in that direction. I mean, I think Elon would be great to see on the show. We were talking about this, Zuck would be cool to see on the show. And I think Sam Altman would be cool to see on the show. So definitely more people in tech. Outside of that, and we talked about some of this, international leadership, leaders of other countries, would be super important, because in all these scenarios we talk about, international cooperation is going to matter so much.
Right on.
We'll reach out to them.
And, yeah, as far as world leaders are concerned, we're on it.
But, well, Alex, thanks again for coming, man.
Fascinating discussion.
I'm just super happy to see all the success that you've amassed throughout your 28 years.
It is, I love saying it.
So thank you for being here.
I know you're a busy guy.
Yeah, thanks for having me.
It was fun.
