The Joe Rogan Experience - #2311 - Jeremie & Edouard Harris
Episode Date: April 25, 2025Jeremie Harris is the CEO and Edouard Harris the CTO of Gladstone AI, a company dedicated to promoting the responsible development and adoption of artificial intelligence. https://superintelligence.gl...adstone.ai/ Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Discussion (0)
The Joe Rogan Experience
Trained by day, Joe Rogan podcast by night, all day!
Alright, so if there's a doomsday clock for AI and we're fucked, what time is it? If midnight is, we're fucked.
We're going right into it.
You're not even going to ask us what we had for breakfast
Let's get freaked out Well, okay. So there's one without speaking to like the fucking tombstay dimension right out there
There's a question about like where are we at in terms of AI capabilities right now?
And what do those timelines look like right? There's a bunch of disagreement
One of the most concrete pieces of evidence that we have recently came out of a lab, an AI kind of evaluation lab called METER, and
they put together this test. Basically, it's like you ask the question, pick a task that
takes a certain amount of time, like an hour. It takes like a human a certain amount of
time. And then see how likely the best AI system is
to solve for that task.
Then try a longer task.
See like a 10 hour task, can it do that one?
And so right now what they're finding is
when it comes to AI research itself,
so basically like automate the work of an AI researcher,
you're hitting 50% success rates for these AI systems
for tasks that take an hour long.
And that is doubling every, right now it's like every four months. So like systems for tasks that take an hour long and that is doubling every right now
It's like every four months. So like you had tasks that you could do, you know a person doesn't five minutes like you know
ordering a new bar eats or like something that takes like 50 minutes like maybe booking a flight or something like that and it's a
Question of like how much can these AI agents do right like from five minutes to 50 minutes to 30 minutes.
And in some of these spaces like research,
software engineering,
and it's getting further and further and further
and doubling it looks like every four months.
So like, yeah.
So if you extrapolate that,
you basically get to tasks that take a month to complete.
Like by 2027,
tasks that take an AI researcher a month to complete,
these systems will be completing with like a 50% success rate
So you'll be able to have an AI on your show and ask it what the doomsday clock is like by then
Probably won't laugh
It'll have a terrible sense of humor about it, but just make sure you ask you what had for breakfast before you starting
What about quantum computing getting involved in AI?
So yeah, honestly, I don't think it's, if you think that you're going to hit human-level AI
capabilities across the board, say 2027, 2028, which when you talk to some of these,
the people in the labs themselves, that's the timelines they're looking at. They're not
confident. They're not sure, but that seems pretty plausible.
If that happens, really there's no way
we're gonna have quantum computing
that's gonna be giving enough of a bump to these techniques.
You're gonna have standard classical computing.
One way to think about this is that the data centers
that are being built today are being thought of
literally as the data centers that are going to house
like the artificial brain that powers super intelligence,
human level AI when it's built in like 2027
and something like that.
So how knowledgeable are you
when it comes to quantum computing?
So a little bit, I mean, like I did my grad studies
in like the foundations of quantum mechanics.
Oh, great.
Yeah, well, it was a mistake,
but I appreciate it for the purpose of the-
Why was it a mistake?
You know, so, academia is this kind of funny thing.
It's really bad culture.
It teaches you some really terrible habits.
So, basically my entire life after academia,
and Ed's too, was unlearning these terrible habits of,
it's all zero sum, basically.
It's not like when you're working in startups. It's not like when you're working in startups it's not like you know when you're working in tech where
you know you build something and something somebody else build something
that's complimentary and you can team up and just like make something amazing
it's always wars over who gets credit who gets their name on the paper did you
cite this fucking stupid paper from two years ago because the author has an ego
and you got to be on I was literally at one point, I'm not gonna get any details here, but like there was a
collaboration that we ran with like this, anyway, fairly well-known guy and my
supervisor had me like write the emails that he would send from his account so
that he was seen as like the guy who is like interacting with this big wig. That kind of thing is like,
doesn't tend to happen in startups, at least not in the
same way because everybody's
wanted credit for the like he wanted just seemed like he was
the genius who was facilitating this
for sounding smart on email. Right? But that that happens
everywhere. And yeah, the reason it happens is that these guys
who are like professors or even not even professors,
just like your post-doctoral guy who's like supervising you,
they can write your letters of reference
and control your career after that lab.
So they can do whatever.
And so what you're doing.
It's like a movie.
Totally.
It's gross.
It's disgusting.
It's like a gross boss in a movie
that wants to take credit for your work, and it's real
It's rampant and the way to escape it is to basically just be like fuck this. I'm gonna go do my own thing
Yeah, share dropped out of grad school to come start a company and and I mean honestly even that
It took it took me it took both of us like a few years to like unfuck our brains and unlearn the bad habits
We learned it was really only a few years later that we started like really, really getting a good
like getting a good flow going.
You're also you're kind of disconnected from the like base reality when you're in the ivory
tower right?
Right.
If you're there's something beautiful about and this is why we spent all our time in startups
but there's something really beautiful about like it it's just a bunch of assholes, us, and like no money, and nothing, and a world of like potential customers, and
it's like, you actually, it's not that different from like stand-up comedy in a way, like your
product is, can I get the laugh, right?
Like something like that, and it's unforgiving.
If you fuck up, it's like silence in the room, it's the same thing with startups, like the
space of products that actually works
is so narrow and you've got to obsess
over what people actually want.
And it's so easy to fool yourself into thinking
that you've got something that's really good
because your friends and family are like,
oh no, sweetie, you're doing a great job.
Like what a wonderful life.
I would totally use it.
I totally see all that stuff, right?
And that's, I love that because it forces you to change.
Yeah, the whole indoctrination thing in academia stuff right and that's I love that because it forces you to change mm-hmm yeah the
the whole indoctrination thing in academia is so bizarre because there's these like
hierarchies of powerful people and they just the idea that you have to
Work for someone someday and they have to take credit by being the the person on the email that that will haunt me for days.
I swear to God, I'll be thinking about that for days now. I fucking can't stand people like that.
It drives me nuts.
One big consequence is it's really hard to tell who the people are who are creating value in that space, too, right?
Of course, sure, because this is just like television.
One of the things about television shows is, so I'll give you an example, a very good friend of mine who's very famous comedian had this show and
His agent said we're gonna attach these producers. It'll help get it made and
He goes well. What are they gonna? Do he goes? They're not gonna do anything
It's just be in name he goes, but they're gonna get credit. He goes. Yeah, he goes fuck that he was none
No, listen listen. This is better for the show. It'll help the show
Okay, but then they'll have excuse me then they'll have a piece of the show.
He's like, yes, yes, but it's a matter
of whether the show gets successful or not,
and this is a good thing to do.
And he's like, what are you talking about?
But it was a conflict of interest
because this guy, the agent,
was representing these other people.
But this is completely common.
So there's these executive producers
that are on shows that have zero to do with it it's so many yeah so many industries are like
this and that's why we got into startups it's it's literally like you and the
world right yeah it's like in a way like stand-up comedy like Jerry said
podcasting or like podcasting where your enemy isn't actually hate it's in
difference like most of the stuff you do especially when you get started like why would anyone like give a shit about you? They're just not going to pay
attention. Yeah, that's not even your enemy, you know, that's just all potential. That's all that
is, you know, like your enemies within you. It's like figure out a way to make whatever you're
doing good enough that you don't have to think about it not being valuable. It's meditative,
like there's no way for it not to be in some way a reflection of yourself.
You're kind of like in this battle with you trying to convince yourself that you're great,
so the ego wants to grow and you're constantly trying to compress it and compress it.
And if there's not that outside force, your ego will expand to fill whatever volume is given to it.
If you have money, if you have fame, if everything's given,
and you don't make contact with the unforgiving on a regular basis,
you're going to end up doing that to yourself and you could yeah
It's possible to avoid but you have to have strategies. Yeah, you have to be intentional about it. Yeah, the best strategy is jujitsu
Yeah, it's it's a mark Zuckerberg is a different person now. Yeah. Yeah, you can see it. You can see it
Yeah, well, it's a really good thing for people that have too much power because you just get strangled all the time
Yeah, and then you just get your arms bent sideways and you after a while you're like, okay
This is reality. This is reality this social hierarchy thing that I've created is just nonsense
It's just smoke and mirrors and they know it is which is why they so rabidly enforce these
Hierarchies the best people seek it out.
They know like why sir and ma'am and all that kind of shit.
That's what it is.
Like you don't feel like you really have respect unless you say that.
Ugh.
These poor kids that have to go from college where they're talking to these dipshit professors
out into the world and operating under these same rules that they've been like forced
and indoctrinated to.
God, to just make it on your own.
It's amazing what you can get used to, though.
And it's funny, you were mentioning the producer thing.
That is literally also a thing that happens in academia.
So you'll have these conversations where it's like,
all right, well, this paper is fucking garbage or something,
but we wanna get it in a paper, in a journal.
And so let's see if we can get a famous guy
on the list of authors so
that when it gets reviewed, people go like, Oh, Mr. So-and-so, okay, like, and that literally
happens, like, you know.
The funny thing is, like, the hissy fits over this are like the stakes are so brutally low,
at least with your producer example, like someone stands to make a lot of money. With
this, it's like, you get maybe like an assistant professorship out of it at best.
That's like 40 grand a year.
And it's just like, it's just why, what do you do?
For the producers, it is money.
But I don't even think they notice the money anymore.
I think a big part, because all those guys
are really, really rich already.
I think if you're a big time TV producer, you're really rich.
I think the big thing is being thought of as a genius who's always connected to successful projects right yeah they
really like that's that is like always gonna be a thing right it wasn't one
producer it was like a couple so there's gonna be a couple different people that
were on this thing that had zero to do with it it was all written by a stand-up
comedian his friends all helped him they all put it together and then he was like no you want a firing his agent over it
Oh, yeah, good for him. I mean, yeah, I get the fuck out of here at a certain point for the producers, too
It's kind of like you'll have people approaching you for help on projects that look nothing like projects. You've actually done
So I feel like it just it just adds noise to your your universe
Like if you're actually trying to build cool shit, you know what I mean? Some people just want to be busy.
They just want more things happening and they think more is better.
More is not better because more is energy that takes away from the better, whatever
the important shit is.
Yeah, the focus.
You only have so much time until AI takes over.
Then you'll have all the time in the world because no one will be employed and everything
will be automated.
We'll all be on universal basic income.
And that's it.
That's a show. The sitcom,
which poor people existing on $250 a week. Oh, I would watch that. Yeah.
Cause the government just gives everybody that. That's what you live off of.
Like weird shit is cheap. Like the, the stuff that's like all like, well,
the stuff you can get from chat bots and AI agents is cheap,
but like food is super expensive or something
Yeah, the organic food is gonna be you're gonna have to kill people for it. You will eat people it will be like a soylent world, right?
Nothing's more free range than people though. That's true depends on what they're eating though. It's just like animals
You know, you don't know I eat a bear that's been eating salmon. They taste like shit. Yeah, I didn't know that
Yeah, I've been eating salmon. They taste like shit. Yeah, I didn't know that yeah
I've been eating my bear wrong this entire time
So back to the quantum thing
So quantum computing is infinitely more powerful than standard computing
Would it make sense then that if quantum computing can run a large language model that it would reach a level
of intelligence that's just preposterous?
So yeah, one way to think of it is like, there are problems that quantum computers can solve
way way way way better than classical computers.
And so like, the numbers get absurd pretty quickly.
It's like problems that a classical computer couldn't solve if it had the entire lifetime
of the universe to solve it, a quantum computer right in like 30 seconds, boom.
But the flip side, there are problems that quantum computers just can't help us accelerate.
The kinds of like one classic problem that quantum computers help with
is this thing called like the traveling salesman paradox or problem,
where you have like a bunch of different locations that a salesman needs to hit,
and what's the best path to hit them most efficiently.
That's like kind of a classic problem
if you're going around different places
and have to make stops.
There are a lot of different problems
that have the right shape for that.
A lot of quantum machine learning, which is a field,
is focused on how do we take standard AI problems,
like AI workloads that we wanna run,
and like massage them into a shape
that gives us a quantum advantage and
That's it's a nascent field. There's a lot going on there
I would expect like my personal expectation is that we just build the human level AI and very quickly after that super intelligence
Without ever having a factor in quantum, but it's a you can define that for people
What's the difference between human level AI and super intelligence? Yeah
so yeah human level AI is like AI you can imagine like it's AI that is as
Smart as you are in let's say all the things you could do on a computer
So, you know you can yeah, you can order food on a computer
But you can also write software on a computer. You can also email people and pay them to do shit on a computer.
You can also trade stocks on a computer.
So it's like as smart as a smart person for that.
Superintelligence, people have various definitions and there are all kinds of like honestly hissy
fits about like different definitions.
Generally speaking, it's something that's like very significantly smarter than the smartest
human.
And so you think about it, it's kind of like,
it's as smart, as much smarter than you
as you might be smarter than a toddler.
And you think about that, and you think about like the,
you know, how would a toddler control you?
It's kind of hard.
Like you can outthink a toddler pretty much
like any day of the week.
And so super intelligence gets us at these levels It's kind of hard. Like, you can outthink a toddler pretty much like any day of the week.
And so superintelligence gets us at these levels where you can potentially do things
that are completely different.
And basically, you know, new scientific theories.
And last time we talked about, you know, new stable forms of matter that were being discovered
by these kind of narrow systems.
But now you're talking about a system that is like, has that intuition combined with the ability to talk to you as
a human and to just have really good like rapport with you, but can also do math, it
can also write code, it can also like, solve quantum mechanics and has that all kind of
wrapped up in the same package.
One of the things too that by definition,
if you build a human level AI,
one of the things it must be able to do as well as humans
is AI research itself.
Or at least the parts of AI research
that you can do in just like software,
like by coding or whatever these systems are designed to do.
And so one implication of that
is you now have automated AI researchers.
And if you have automated AI researchers, that means you have AI systems that can automate the development of the next level of their own capabilities.
And now you're getting into that whole singularity thing where it's an exponential
that just builds on itself and builds on itself, which is kind of why, you know, a
lot of people argue that, like, if you build human level AI, super
intelligence can't be that far away, you've basically
unlocked everything.
Because we kind of have gotten very close, right? Like it's,
it's past the the Fermi, not the Fermi paradox, the what is
it? Oh, yeah, the the
we're just talking about them. Yeah, the test the
all the Turing test during Turing. Yeah, thank you. We're just talking about him the other yeah the test the oh the Turing tests during touring. Yeah
Thank you. We're just talking about a horrible
What happened to him was you know the yeah chemically castrated him because he was gay yeah
Horrific minds of killing himself the guy who figures out
What's the test to figure out whether or not a eyes become sentient and by the way does this in like what?
1950s oh yeah, yeah Alan Turing is like he the guy was a beast
Right. How did he think that through he invented even know he invented basically the concept that underlies all computers
Like he was like an absolute beast. He was a code breaker that he broke the Nazi codes, right?
And he also wasn't even the first person to come up with this idea of machines, building machines, and there being implications like human disempowerment. So if you go back to, I
think it was like the late 1800s, and I don't remember the guy's name, but I
sort of like came up with this, he was observing the Industrial Revolution and
the mechanization of labor and kind of starting to see more and more like if you
zoom out it's almost like you have an, humans are an ant colony and the
artifacts that that colony is producing that are really interesting are these machines
You know you kind of like look at the surface of the earth is like gradually increasingly
Mechanized thing and it's not super clear if you zoom out enough like what is actually running the show here
Like you've got humans servicing machines humans looking to improve the capability of these machines at this frantic pace like they're not even in control
What they're doing or we economic forces are pushing we the servant of the machines at this frantic pace. Like they're not even in control of what they're doing. Economic forces are pushing.
Are we the servant of the master, right?
At a certain point, like, yeah.
And the whole thing is like, especially with a competition that's going on, um,
between the labs, but just kind of in general, you're at a point where like, do
the CEOs of the labs, like they're, they're these big figureheads.
They, they go on interviews, they talk about what they're doing and stuff.
Do they really have control over the over any part of the system?
Because like-
The economy's in this almost convulsive fit, right?
Like you can almost feel like it's hurling out AGI.
And as one kind of, I guess, data point here,
like all these labs, OpenAI, Microsoft, Google, every year
they're spending like an aircraft carrier worth of capital individually, each of
them, just to build bigger data centers, to house more AI chips, to train bigger
more powerful models, and that's like, so we're actually getting to the point
where if you look at on a power consumption basis, like we're getting to
you know, two, three, four, five percent of US power production
if you project out into the late 2020s.
Kind of 2026 27.
You're talking about not for double digit though, not for double digit, but for single
digit.
Yeah, you're talking like that's a few gigawatts of one gigawatt.
Not for single digit.
It's in the like, for 2027, you're looking at like, you know, in the point five ish percent, but it's like, it's a big fucking frat like you're talking about gigawatts and gigawatts one gigawatt is a million homes.
So you're seeing like one data center in 2027 is easily going to break a gig, there's going to be multiple like that. And so it's like 1000 sorry, a million home city metropolis really, that is just dedicated to training like one fucking model
That's what this is again if you zoom out at planet earth you can interpret it as like this
Like all these humans frantically running around like ants just like building this like artificial brain
mind assembling itself on the face of the planet Marshall McLuhan in like
1963 or something like that said human beings are the sex organs of the machine world
Oh, that hits that hits different today. Yeah
Does I've always said that if we were aliens or if aliens came here and studied us they'd be like
What is the dominant species on the planet doing? Well, it's making better things. That's all it does
It's not the whole thing
It's dedicated to making better things and all of all it does. The whole thing is dedicated to making better things.
And all of its instincts, including materialism, including status, keeping up with the Joneses,
all that stuff is tied to newer, better stuff. You don't want old shit. You want new stuff.
You don't want an iPhone 12. What are you doing, you loser? You need newer, better stuff.
And they convince people, especially in the realm of like consumer electronics
Most people are buying things absolutely don't need the vast majority of the spending on new phones is completely unnecessary
Yeah, but I just need that extra like that extra like fourth camera though
My life isn't I run one of my phones is an iPhone 11 And I'm purposely not switching it just to see if I notice it. I fucking never notice anything
I've watched YouTube on it. I text people. It's all the same. I go online it works
It's all the same probably the biggest thing there is gonna be the security side, which um no they update the security
It's all software, but I mean if you're if your phone gets old enough
I mean like at a certain oh when enough, I mean, like at a certain point,
Oh, when they stop updating it? Yeah, like iPhone one, you know,
China's watching all your dick pics.
Oh, dude. I mean, Salt Typhoon, they're watching all our dick pics.
Yeah, they're definitely seeing mine.
What's Salt Typhoon?
So Salt Typhoon, oh, sorry. Yeah, yeah.
So it's this big Chinese cyber attack actually starts to get us to
to kind of the broader.
Great name, by the way, Salt Typhoon.
You know, yeah, guys, I really wish they would name it. They a great name by the way, salt typhoon. Fuck yeah guys.
I really wish they would name it.
They have the coolest names for their cyber operations
meant to destroy us.
That's a great name.
Salt typhoon's pretty slick.
You know what it's kind of like when people go out
and do like an awful thing, like a school shooting
or something and they're like, oh, let's talk about,
if you give it a cool name, like now the Chinese
are definitely gonna do it again.
Anyway, that's-
Because they have a cool name.
Yeah, that's definitely a factor typhoon. I food pretty dope
Yeah, but it's this thing where basically so so there was in the the 3g kind of protocol that was set up years ago
Law enforcement agencies included back doors intentionally to be able to access comms
you know theoretically if they got a warrant and so on and
Well, you introduce a back door you have adversaries like China who are wicked good at cyber
They're gonna find and exploit those back doors
And now basically they're they're sitting there and they had been for some people think like maybe a year or two before it was really
Discovered and just a couple months ago. They kind of go like oh cool
Like we got fucking like China all up in our shit
and this is like this is like flip a switch for them and like you turn off the power water to a state or like
You fucking yeah. Well, sorry. This is sorry salt typhoon. That was about just
Sitting on the the like basically telecoms. Well, that's the telecom. Yeah, it's not that but but yeah, I mean that's another there's another
There's another thing where they're doing that too. Yeah, and so this is kind of where
what we've been looking into over the last year
is this question of how, what is,
if you're gonna make like a Manhattan project
for super intelligence, right?
Which is, I mean, that's what we were texting about
like way back and then actually funnily enough,
we shifted our date for security reasons,
but if you're gonna do a Manhattan project
for super intelligence,
what does that have to look like? What does the security game have to look like to actually make it so that China's not all up in your shit? Like today, it is extremely clear that at the world's
top AI labs, like all that shit is being stolen. Like there is not a single lab right now that
isn't being spied on successfully based on everything we've seen
Um by the chinese can I ask you this are we spying on the chinese as well? That's a big problem
Do you want to we're we're we're I mean, we're definitely we're definitely doing some stuff. Um,
But in terms of the the relative balance between the two we're not where we need to be
They spy on us better than we spy on them.
Yeah, because like we, because like our-
They build all our shit.
They've spilled all our shit.
Well, that was the Huawei situation, right?
Yeah, and it's also the, oh my God, it's the,
like if you look at the power grid,
so this is now public,
but if you look at like transformer substations,
so these are the essentially, anyway,
they're a crucial part of the electrical grid.
And there's really like
Basically all of them have components that are made in China
China is known to have planted back doors like Trojans into those substations to fuck with our grid
The thing is when you see a salt typhoon when you see like big Chinese cyber attack or a big Russian cyber attack
You're not seeing their best these countries do not go and show you
like their best cards out the gate.
You show the bare minimum that you can
without tipping your hand at the actual exquisite
capabilities you have.
Like the way that one of the people kind of
who's been walking us through all this really well
explained it is like, the philosophy is you want to learn
without teaching, right? You want to use what is the lowest level capability that has the
effect I'm after.
So I'll get I'll give an example. Like I'll tell you a story that's that's kind of like,
it's a public story and it's from a long time ago, but it kind of gives a flavor of like
how far these countries will actually go when they're playing the game for fucking real. So it's 1945. America
and the Soviet Union are like best pals because they've just defeated the Nazis, right? To
celebrate that victory and the coming new world order that's going to be great for everybody,
the children of the Soviet Union give us a gift to the American ambassador in Moscow this beautifully
carved wooden seal of the United States of America beautiful thing ambassadors
thrilled with it he hangs it up on behind his desk in his private office you
can see where I'm going with this probably but yeah seven years later 1952
finally occurs to us like let's take a town and actually examine this.
So they dig into it, and they find this incredible
contraption in it called a cavity resonator.
And this device doesn't have a power source,
doesn't have a battery, which means when you're sweeping
the office for bugs, you're not gonna find it.
What it does instead is it's designed, that's it, that's it.
It's beautiful.
They call it the thing.
And what this cavity resonator does
is it's basically designed to reflect radio radiation
back to a receiver to listen to all the noises
and conversations and talking
in the ambassador's private office.
And so-
How's it doing it without a power source so that's what they do so the Soviets for
seven years parked a van across the street from the embassy had a giant
fucking microwave antenna aimed right at the ambassador's office and we're like
zapping it and looking back at the reflection literally listening to every
single thing he was saying and the best part was
When the embassy staff was like we're gonna go and like sweep the office for bugs periodically. They'd be like, hey, mr Ambassador we're about to sweep your office for bugs and the ambassador was like cool
Please proceed and go and sweep my office for bugs and the KGB dudes in the van were like just turn it off
Sounds like they're gonna sweep the office for bugs
Let's turn off our giant microwave antenna, and they kept at it for seven years
It was only ever discovered because there was this like British radio operator who was just you know doing his thing changing his dial
He's like oh shit like is that the ambassador fucking talk randomly so so the thing is oh and actually sorry
One other thing about that if you heard that story, and you're kind of thinking to yourself, hang on a second, they were shooting
like microwaves at our ambassador 24 seven for seven years.
Whoa.
Doesn't that seem like it might like fry his genitals
or something?
Yeah, or something like that.
You're supposed to have a lead vest.
And the answer is, yes.
Yes.
Yes, and this is something that came up in our investigation
just from every single person who was filling us in
and who dialed in and knows what's up.
They're like, look, so you gotta understand,
like our adversaries, if they need to give you cancer
in order to rip your shit off of your laptop,
they're gonna give you some cancer. Did he get he cancer I don't know specifically about the ambassador but
like it's that's also so we're limited what we can say there's there's actually
people that you talk to you later that can go in in more detail here but older
technology like that kind of lower powered so you're less likely to look at that.
Nowadays, we live in a different world.
The guy that invented that microphone invented, his last name is Theremin, he invented this instrument called the Theremin.
Which is a fucking really interesting thing.
Oh, he's just moving his hands?
Yeah, your hands control it, waving over this.
What?
It's a fucking wild instrument.
Have you seen this before, Jamie?
Yeah, I saw Juicy J playing it yesterday on Instagram.
He's like practicing
It's a fucking
Pretty good at it, too
There's we to to both hands are controlling it by moving in and out and space X Y's
I don't honestly don't really know how the fuck it works, but wow
That is wild it's also a lot harder to do than it seems
So American the Americans tried to replicate this for years and years and years without
without really succeeding and
Anyway, that's all kind of part. I have a friend who used to work for intelligence agency and he was working in Russia and
The fact they found that the building was bugged with these
super sophisticated bugs that operate,
their power came from the swaying of the building.
Get out.
I've never heard that one before.
Just like your watch, like I have a mechanical watch on,
so when I move my watch, it powers up the spring
and it keeps the watch, that's how an automatic
mechanical watch works.
They figured out a way to, just by by the subtle swing of the building in the wind
That was what was powering this listening device. So this is the thing right like the I mean what the fuck
Well, and it was that the things that nation states up Jimmy
Says that's that's what was powering this thing the great seal bug which I really think so is another one
No, oh, this is so you can actually see in that video I think there was a YouTube yeah so same kind of thing
Jamie I was just I typed in rush this by bug building sway the thing is what pops
up the thing which is what we were just talking about oh that thing it so that's
powered the same way by this way I don I think it was powered by radio frequency emission.
So there may be another thing related to it.
Not sure, but yeah.
Maybe Google's a little confused.
Maybe it's the word sway is what's throwing it off.
But it's a great catch.
And the only reason we even know that, too,
is that when the U-2s were flying over Russia,
they had a U-2 that got shot down in 1960 the Russians go like oh like friggin
Americans like spying on us what the fuck I thought we were buddies or what
well it's the 60s obviously I didn't think that but and then the Americans are like okay
bitch look at this and they brought out the the seal and that's how it became
public it was basically like the response to the Russians saying like you
know wow yeah yeah they're all dirty
Everyone's spying on everybody. That's the thing and I think they probably all have some sort of UFO technology
We need to talk about that. We turn off our mics and
99% sure a lot of that shit you need to talk to some of the talking to people. Oh, I'm
Yeah, I'm talking to a lot of people there's there might be some other people that you'd be interested in
I would very much be interested. Here's the problem some of the people I'm talking to I am positive
Where's they're talking to me to give me bullshit?
I cuz I
Know you guys are at the list, but there's certain people in my room
on your list? No, you guys aren't on the list. But there's certain people, I'm like, okay, maybe most of this is true, but some of it's not on purpose. There's that. I guarantee you, I know I talk to people that don't tell me the truth.
Yeah, yeah. It's an interesting problem in like all Intel, right?
Because there's always, the mix of incentives is so fucked. Like the adversary is trying to add noise into the system.
You've got pockets of people within the government that have different incentives from other pockets.
And then you have top secret clearance and all sorts of other things that are going on.
One guy that texted me is like, the guy telling you that they aren't real is literally involved
in these meetings. So stop. Just stop listening to him.
It's like one of the techniques, right, is like, is actually to inject so much noise
that you don't know what's what and you can't follow.
So this actually, this happened in the COVID thing, right?
The lab leak versus the natural like wet market thing.
Yeah.
So I remember there was a debate that happened about what was the origin of COVID.
This was like a few years ago.
It was like an 18 or 20 hour long YouTube debate just like punishingly long. And it was like there was $100,000 bet either way on who would win and it was like lab leak versus wet market. And at the end of the 18 hours, the conclusion was like one of the one but the conclusion was, it's basically 5050 between them. And then I
remember like hearing that and talking to some folks and being like, hang on a second. So you
got to believe that whether it came from a lab, or whether it came from a wet market, one of the top
three priorities of the CCP from a propaganda standpoint is like, don't get fucking blamed for
COVID. And that means they're putting like one to $10 billion
and some of their best people on a global propaganda effort
to cover up evidence and confuse and blah, blah, blah.
You really think that you're 50%,
like that confusion isn't coming
from that incredibly resourced effort.
Like they know what they're doing.
Particularly when different biologists and virologists
who weren't attached to anything
were talking about, like, the cleavage points
and different aspects of the virus
that appeared to be genetically manipulated,
the fact that there was only one spillover event,
not multiple ones, none of it made any sense,
all of it seemed
like some sort of genetically engineered virus. It seemed like gain of function research.
And the early emails were talking about that. And then everybody changed their opinion.
And even the taboo, right, against talking about it through that lens.
Oh yeah, total propaganda. It's racist. Which is crazy because nobody thought the Spanish
flu was racist and it didn't even really come from Spain
Yeah, that's true. Yeah, yeah from Kentucky. I didn't know that. Yeah, I think it was Kentucky of Virginia
Where that was a Spanish flu or originate from but nobody got mad
Well, that's cuz that's cuz the that's cuz the state of Kentucky has an incredibly sophisticated propaganda machine
an incredibly sophisticated propaganda machine and pinned it on Spanish. It might not have been Kentucky, but it was, I think it was an agricultural thing.
Kansas. Thank you.
Yeah, god damn Kansas. I've always said that.
Likely originated in the United States, H1N1 strain, had genes of avian origin.
By the way, people always talk about the Spanish flu.
If it was around today, they would just everybody just get
Antibiotics and we'd be fine. So this this whole mass die-off of people it would be like the Latin X flu and
We would be a lot
That one didn't stick at all
A lot of people like claiming they never used it and they pull up old videos of them
Yeah, like that's a dumb one like it's literally a gendered language you fucking idiot
For a while though like
Yeah, so think about how long they did lobotomies
Hmm. They did lobotomies for 50 fucking years for when a maybe we should stop doing this
It was like the same attitude that got Turing chemically
castrated, right?
I mean, they're like, hey, let's just get in there
and fuck around a bit and see what happens.
Well, this was before they had SSRIs
and all sorts of other interventions.
But what was the year lobotomies?
I believe it stopped in 67.
Was it 50 years?
I think you said 70 last time, and that was correct
when I pulled it up.
70 years? 1970. Oh, I think it was 67. I like how this has last time and that was correct when I pulled it up. 70 years?
1970. Oh, I think it was 67. I like how this has come up so many times that James is like, I think last time you said it was 70.
It comes up all the time because it's one of those things. It's insane. You can't just trust the medical establishment.
Officially 67, it says maybe one more in 70. Oh god. Oh he died in 72. When did they start doing it?
When the I think they started in the 30 or the 20s rather
That's pretty ballsy. You know the first the first time the first guy who did a lobotomy
Yeah, this is 24 Freeman arrives watch DC direct labs 35. They tried it first
Scramble your fucking brains, but doesn't it make you feel better to call it a leucotomy though?
Because it sounds a lot more professional.
No.
Lobotomy, leucotomy, it sounds, leucotomy sounds gross.
Sounds like loogie.
Like you're fucking a loogie.
Like lobotomy.
Boy.
Topeka, Kansas.
Also Kansas.
All roads point to Kansas.
All roads point to Kansas.
That's what happens when everything's flat, you just lose your fucking marbles.
You go crazy. That's the main issue with that flat. You just lose your fucking marbles you go crazy
That's the main issue with that price so they did this for so long somebody won a Nobel Prize for lobotomy wonderful
Imagine give that back at first shit. Yes, seriously
You're kind of like you know you don't want to display it up in your shelf, but it's just a good
Indicator it's like it should let you know that oftentimes science is incorrect and that oftentimes, you know
Unfortunately people have a history of doing things and then they have to justify that they've done these things
Yeah, and they you know
But now there's also there's so much more tooling to write if you're a nation state and you want to fuck with people and inject
narratives into the
Right, like the the whole idea of autonomous AI
agents to like having these basically like Twitter bots or whatever bots like
a lot of one thing we've been we've been thinking about to con on the side is
like the idea of you know audience capture right you have like like big
people with high profiles and kind of gradually steering them towards a
position by creating bots that like through comments through right votes, you know
100% it's
Absolutely real. Yeah a couple of the the big like a couple of big counts on X
Like that that we're in touch with have sort of said like yeah, especially in the last two years
It's actually become hard like I spent with the thoughtful ones, right?
It's become hard to like stay sane,
not on X, but like across social media,
on all the platforms.
And that is around when it became possible
to have AIs that can speak like people,
90%, 95% of the time.
And so you have to imagine that, yeah,
adversaries are using this and doing this
and pushing the frontier. No doubt. They'd be fools if they didn't do it. Oh yeah, 100%. You have to do it
because for sure we're doing that. And this is one of the things where you know
like it used to be so OpenAI actually used to do this assessment of their AI
models as part of their kind of what they call their preparedness
framework that would look at the persuasion capabilities of their kind of what they call their preparedness framework that would look at the persuasion capabilities of their models as one kind of threat vector.
They pulled that out recently, which is kind of like...
Why?
You can argue that it makes sense.
I actually think it's somewhat concerning because one of the things you might worry
about is if these systems, sometimes they get trained through what's called reinforcement
learning, potentially you could imagine training these to be super persuasive by having them interact
with real people and convince them, practice convincing them to do specific things.
If that if you get to that point, you know, these these labs ultimately will have the
ability to deploy agents at scale that can just persuade a lot of people to do whatever
they want, including pushing legislative agendas.
Vote like, help them help them prep for meetings with the Hill,
the administration, whatever.
Like, how should I convince this person to do that?
Right, yeah.
Well, they'll do that with text messages.
Make it more business-like.
Make it friendlier.
Make it more jovial.
But this is like the same optimization pressure
that keeps you on TikTok, that same addiction.
Imagine that applied to persuading you of some like some fact, right? Yeah, that's like a on the other hand
Maybe a few months from now. We're all just gonna be very very convinced that it was all fine. That's what we do
Yeah, maybe they'll get so good
It'll make sense to you. Maybe they'll just be right
It'll make sense to you. Maybe they'll just be right
Yeah, it's it's a confusing time period, you know, we've talked about this ad nauseum but it bears repeating this former FBI
Analyst who investigated Twitter before Elon bought it said that he thinks it's about 80% bots. Yeah Yeah
80% that's that's one of the reasons why the bot purge like when when Elon acquired it and started working on is
So important like there needs to be the challenges like detecting these things is so hard, right?
increasingly like more and more they can hide like
basically perfectly like how do you tell the difference between a
cutting-edge AI bot and a human just from the camp because they because they actually can't generate AI images of a family a backyard
Barbecue post all these things up and make it seem like it's real. Yeah, especially now AI images are insanely good now
They really are. Yes, it's crazy
And if you have a person you could just you could take a photo of a person and manipulate it in any way
You'd like and then now this is your new guy
You could do it instantaneously and then this guy has a bunch of opinions on things and it seems to seems always aligned with the Democratic Party
But whatever he's a good guy
Family man looks out in this barbecue
He's not even a fucking human being and people are arguing with this bot like back and forth
And you'll see it on any social issue you see with Gaza and Palestine you see it with abortion
You see it with religious freedoms.
You just see these bots, you see these arguments,
and you see various levels.
You see the extreme position,
and then you see a more reasonable centrist position.
But essentially what they're doing
is they're consistently moving what's okay
further and further in a certain direction.
And in fact, it's both directions.
Like it's like, you know how when you're trying to,
like you're trying to capsize a boat or something,
you're like fucking with your buddy
on the lake or something.
So you push on one side, then you push on the other side,
then you push, and until eventually it capsizes.
This is kind of, like our electoral process
is already naturally like this, right?
We go like we have a party in power for a while, then like they they get, you know,
they basically get like you get tired of them and these you switch. And that's kind of the
natural way how democracy works or in a republic. But the way that adversaries think about this
is they're like, perfect, this swing back and forth. All we have to do is like, when
it's on this way, we push and push and push and push until it goes more extreme and then there's a reaction
to it right and that's swinging back and we push and push and push on the other
side until eventually something breaks and that's a risk.
Yeah it's also like you know the organizations that are doing this like
we already know this is part of Russia's MO, China's MO because back when it was
easier to detect, we already
could see them doing this shit.
So there is this website called, this person does not exist.
I still exist surely now, but it's kind of, kind of cool.
Superceded.
Yeah.
But you would like, every time you refresh this, this website, you would see a different
like human face that was AI generated and what the Russian internet research agency
would do.
It still exists.
Yeah, exactly. What all these...
It's actually, yeah, I don't think they've really upgraded it since...
I don't know, yeah.
But, yeah, stop.
That's fake?
Wow, they're so good.
This is old.
This is like years old.
Years old.
And you could actually detect these things pretty reliably.
Like, you might remember the whole thing about
AI systems were having a hard time generating hands
that only had like five fingers.
Right.
That's over, though. That's fingers. Right, that's over though.
That's a little hints of it were though,
back in the day in this person does not exist
and you'd have the Russians would take like the face
from that and then use it as the profile picture
for like a Twitter bot.
Right.
And so that you could actually detect you'd be like,
okay, I've got you there, I've got you there,
I can kind of get a rough count.
Right.
Now we can't, but we definitely know they've been
in the game for a long time. there's no way they're not right
now.
And the thing with the thing with like nation state, like propaganda attempts, right? Is
that, like, people have this this idea that like, ah, like, I've caught this like Chinese
influence operation or whatever, like we nail them. The reality is, nation states operate
at like, 30 different levels levels and if you're a priority
like just influencing our information spaces as a priority for them they're
not just gonna operate they're not just gonna pick a level and do it they're
gonna do all 30 of them and so you even if you're like among the best in the
world like detecting this shit you're gonna like you're gonna catch and stop
like levels one through ten and then you're gonna be like you're gonna like you're gonna catch and stop like levels one through ten and
then you're gonna be like you're gonna be aware of like level 11 12 13 like
you're working against it and you're you know maybe you're starting to think
about level 16 and you imagine like you know about level 18 or whatever but
they're like they're above you below you all around you they're they're
incredibly incredibly resource and this is something that came like came came
through very strongly for us you guys have seen the Yuri Besman off video from 1984 where he's talking about
how the all our educational institutions have been captured by Soviet propaganda
it was talking about Marxism has been injected into school systems and how you
have essentially two decades before you're completely captured by these
ideologies and it's going to permeate and destroy all of your confidence and democracy and he was a
Hundred percent correct and this is before these kind of tools before because like the vast majority of those
Exchanges of information right now are taking place on social media the vast majority of debating about things
Arguing all taking place on social media and The vast majority of debating about things, arguing, all taking place on social media. And if that FBI analyst is correct, 80% of it's bullshit, which is
really wild.
Well, and you look at like some of the documents that have come out, I think it was like the,
I think it was the CIA game plan, right? For regime change or like undermining, like how
do you do it, right? Have multiple decision makers at every level. All these things. And
like, what a surprise
That's exactly what like the US bureaucracy looks like today slow everything down make change impossible
Yeah, make it so that everybody gets frustrated with it and they give up hope
They decided to do that to other countries like yeah for sure they do that here open society, right?
I mean that's part of the trade-off and that's actually a big big part of the challenge, too
So when when we're working on this right like one of the things ed was talking about these like the 30 different layers of
Security access or whatever one of the consequences is you bump into a team at?
So like the teams we ended up working with on this project were folks that we bumped into after
The end of our last investigation who kind of were like, oh, We talked about last year. Yeah. Yeah. Yeah. Yeah, like looking at AGI looking at the national security
kind of landscape around that and
A lot of them like really well placed. It was like, you know, you're special forces guys from tier one units
so you'll seal team six type thing and
Because they're so like in that ecosystem
You you'll see people who were like ridiculously specialized and competent like the best people in the world at
doing whatever the thing is like to break the security and they don't know
often about like another group of guys who have a completely different
capability set and so what you find is like you're you're indexing like hard on
this vulnerability and then suddenly someone says oh yeah but by the way I can I can just hop that fence. So it's really funny. The really funny thing about this is like
Most or even like almost all of the really really like elite security people
Kind of think that like all the other security people are dumbasses even when they're not or like yeah
They're they're they're biased in the direction of because it's so easy when everything's like stove piped.
But so most people who say they're like elite at security
actually are dumbasses.
Because most security is like about checking boxes
and like SOC 2 compliance and shit like that.
But yeah, what it is is it's like,
so everything's so stove piped
that you don't, you literally can't know
what the exquisite state of the art
is in another domain.
So it's a lot easier for somebody to come up and be like,
oh yeah, like I'm actually really good
at this other thing that you don't know.
And so figuring out who actually is the,
like we had this experience over and over where like,
you know, you run into a team
and then you run into another team,
they have an interaction, you're kind of like,
oh, interesting.
So like, you know, like these are the really kind of
the people at the top of their game. And that's been this very long process to figure of like, oh, interesting. So like, you know, like these are the really kind of the people at the top of their game.
And that's been this very long process to figure out like, okay, what does it take to actually secure our critical infrastructure
against like CCP, for example, like Chinese attacks, if we're if we're building a super intelligence project. And it's it's this weird
like kind of challenge because of the stove piping, no one has the full picture. And we don't think that we have it even now,
but definitely don't know of anyone who's come like that,
like this close to it.
The best people are the ones who,
when they encounter another team and other ideas
and start to engage with it, are like,
instead of being like, oh, like,
you don't know what you're talking about,
who just like actually lock on and go like,
that's fucking interesting
Tell me more about that right people that have control their ego
Yes hundred percent with everything the best of everything in life the best of the best like got there by
Eliminating their ego as much as they could yeah always the way it is
Yeah
and it's it's it's also like the the fact of you of the 30 layers of the stack or whatever it is of all these
security issues means that no one can have the complete picture at any one time.
And the stack is changing all the time.
People are inventing new shit.
Things are falling in and out of.
And so figuring out what is that team that can actually get you that complete picture
is an exercise.
A, you can't really do,
it's hard to do it from the government side
because you gotta engage with data center building companies.
You gotta engage with the AI labs.
And in particular with like insiders at the labs
who will tell you things that by the way,
the lab leadership will tell you the opposite of
in some cases.
And so like, it's just this Gordian knot,
like it took us months
to like pin down every kind of dimension that we think we've pinned down at this point.
I'll give an example actually of that, like, the trying to do the handshake, right, between
different sets of people. So we were talking to one person who's, who's thinking hard about
data center security, working with like frontier labs on this shit. Very much like at the top of her game.
But she's kind of from like the academic space, kind of Berkeley, like the
avocado toast kind of side of the spectrum, you know?
And she's talking to us.
She'd reviewed our, the, the report we put out, the investigation we put out.
And she's like, you know, I think I think you guys are are talking to the
Wrong people and we're like, can you say more about that?
And she's like, well, I don't think like you you know, you talked to your one special forces
I don't think they like know much about that. We're like
Okay, that's not correct. But can you say why and she's like I feel like those are just the people that like go and like
bomb stuff blow shit like blow shit up
Yeah, it's understandable too
Cuz like a lot of totally understandable a lot of people have the wrong sense of like what a tier one
Asset actually can can do it's like that's ego on her part because she doesn't understand what they do
It's ego all the way down, right?
I mean like that's a dumb thing to say if you literally don't know what they do and you say don't they just blow stuff up
Where's my latte? It's a weirdly good impression, but she did ask about a latte. She did
But did she talk in up speak you should fire everyone who talks in up speak. She didn't talk in up speak
But the moment they do that you should just tell them to leave
There's no way you have an original thought
This is how you talk China. Can you get out of our data center?
Yeah, please.
I'm enjoying my avocado tapes.
I don't want to rip on that too much though, because this is one really important factor
here is all these groups have a part of the puzzle and they're all fucking amazing.
They are like world class at their own little slice.
And a big part of what we've had to do is like bring people together and and there are
people who've helped us immeasurably do this but like bring people together and
And explain to them the value that each other has in a way. That's like
That that allows that that bridge building to be made and by the way the the the tier one guys are the the most like
ego The Tier 1 guys are the most ego-moderated of the people that we talk to.
There's a lot of Silicon Valley hubris going around right now, where people are like,
listen, get out of our way, we'll figure out how to do this super secure data center infrastructure.
We got this. Why? Because we're the guys building the AGI, motherfuckers!
That's kind of the attitude. And it's like, cool, man.
That's like a doctor having an opinion about like how to repair your car.
I get that it's not the like
like elite kind of like, you
know, whatever.
But someone has to help you build
like a good friggin fence.
Like, I mean, it's not just that.
Dunning-Kruger effect.
Dunning, yeah.
It's a mixed bag, too, because like,
yes, the a lot of hyperscalers
like like Google, a lot of hyperscalers like Google, Amazon, genuinely do have some of
the best private sector security around data centers in the world, like hands down.
The problem is, there's levels above that.
And the guys who like, look at what they're doing and see what the holes are, just go
like, oh yeah, like, I could get in there, no problem, and they can fucking do it.
One thing my wife said to me on a couple of occasions, like, you seem to like, and this
is towards the beginning of the project, like, you seem to like, change your mind a lot about
what the right configuration is of how to do this.
And yeah, it's because every other day, you're having a conversation with somebody who's
like, oh yeah, like, great job on this thing,
but I'm not gonna do that.
I'm gonna do this other completely different thing.
And that just fucks everything over.
And so you have enough of those conversations,
and at a certain point, your plan,
your game plan on this can no longer look like
we're gonna build a perfect fortress.
It's gotta look like we're going to account
for our own uncertainty on the security side and the fact that we're never gonna be able to patch everything
Like you have to I mean it's like the time and that means you actually have to go on offense from the beginning as
Because like the truth is and this came up over and over again
There's no world where you're ever gonna build the perfect exquisite
Fortress around all
your shit and hide behind your walls like this forever. That just doesn't work
because no matter how perfect your system is and how many angles you've
covered, like your your adversary is super smart, is super dedicated, if you
see the field to them they're right up in your face and they're reaching out and
touching you and they're trying to see like what what your seams are where they
break. And that just means you have to reach out and touch
them from the beginning. Because until you've actually like reached out and used a capability
and proved like, we can take down that infrastructure, we can like disrupt that that cyber operation,
we can do this, we do that. You don't know if that capability is real or not. Like, you
might just be like lying to yourself
and like, I can do this thing whenever I want,
but actually.
You're kind of more in academia mode than like
startup mode. Exactly.
Cause you're not making contact every day
with the thing, right?
You have to touch the thing.
And there's like, there's a related issue here,
which is a kind of like willingness that came up
over and over again.
Like one of the kind of gurus of this space was like,
made the point, a couple of them made the point that
You know you you can have the most exquisite capability in the world
but if you if you don't actually have the willingness to use it you might as well not have that capability and
the challenges right now
China Russia like our adversaries pull all kinds of stunts on us and get no consequence particularly during the previous administration
This was a huge huge problem during the previous administration where you actually you actually had
Sabotage operations being done on American soil by our adversaries where
You had administration officials as soon as like a thing happened so there were
for example there was like four different states had their 911 systems
go down like at the same time different systems do like unrelated stuff but it
was like it's it's this stuff where it's like let me see if I can do that let me
see if I can do it let me see what the reaction is let me see what the chatter
is that comes back after I do that.
And one of the things that was actually pretty disturbing about that was under that administration
or regime or whatever, the response you got from the government right out the gate was,
oh, it's an accident.
And that's actually unusual.
The proper procedure, the normal procedure in this case is is to say we can't comment on an ongoing investigation,
which we've all heard, right?
Like, we can't comment on blah, blah, blah.
Right, we can neither confirm nor deny.
Exactly, it's all that stuff,
and that's what they say typically out the gate
when they're investigating stuff.
But instead, coming out and saying,
oh, it's just an accident is a break with procedure.
What do you attribute that to?
They, if they say they if they leave an
opening or say actually this is an adversary action we think it's an
adversary action they have to respond the public demands a response and they
don't there they were to fear of escalation fearful so escalate so what
ends up happening right is and by the way that that thing about like it's an
accident comes out often
before there would have been time for
Investigators to physically fly on site and take a look like there's no logical way that you could even know that at the time
And they're like boom that's an accident don't worry about it
So they have an official answer and then their
Responses to just bury their head in the sand and not investigate right because if you were to investigate if you were to say okay
We looked into this,
it actually looks like it's fucking like Country X
that just did this thing.
If that's the conclusion,
it's hard to imagine the American people not being like,
what are we, like we're letting these people
injure our American citizens on US soil,
take out like US national security,
like critical infrastructure,
and we're not doing anything. Like the concern is about this, we're getting in our own way security, like, or critical infrastructure, and we're not doing anything.
Like the concern is about this, like, we're getting in our own way of thinking like, oh,
well, escalation is going to happen. And boom, we run straight to like, there's going to
be a nuclear war, everybody's going to die. Like, when you do that, you the peace between
nations stability does not come from the absence of activity. It comes from consequence. It
comes from just
like if you have an individual who misbehaves in society, there's a consequence and people
know it's coming. You need to train your counterparts in the international community, your adversary,
to not fuck with your stuff.
Can I stop for a second? So are you essentially saying that if you have incredible capabilities
of disrupting grids
and power systems and infrastructure,
you wouldn't necessarily do it,
but you might try it to make sure it works a little bit.
Exactly.
And that this is probably the hints of some of this stuff
because you've kind of-
You've got to get your reps in, right?
You've got to get your reps in.
It's like, okay, so suppose that I went to you
and was like, hey, I bet I can kick your ass. I bet I can like friggin slap a rubber guard on you and like do whatever the fuck
Right and you're like your expression by the way. Yeah. Yeah, you look really convinced. It's cuz I'm jacked, right?
Well, no, there's people that look like you that can strangle me believe it or not
Yeah, there's a lot of like very high-level Brazilian jiu-jitsu black belts that are just super nerds and they don't lift weights at all
They only do jujitsu and if you only do jujitsu, you'll have like a wiry body. That was heartless
So that's slip that in like there's like two guys who look like you is like just real fucking
Intelligent no, they're like some of the most brilliant people I've ever met the really that's the issue
It's like data nerds get really involved in Jiu-Jitsu, and Jiu-Jitsu's data.
But here's the thing. So that's exactly it, right?
So if I told you, I bet I can tap you out, right?
I'd be like, where have you been training?
Well, right. And if you're like, my answer was, oh, I've just read a bunch of books.
You'd be like, oh, cool, let's go.
Right? Because making contact with reality is where the fucking learning happens.
You can sit there and think all you want, but unless you've actually played the chess match,
unless you've reached out, touched, seen what the reaction is and all this stuff,
you don't actually know what you think you know, and that's actually extra dangerous.
If you're sitting on a bunch of capabilities and you have this like unearned sense of superiority
because you haven't used those exquisite tools, like it's a challenge.
And then you've got people that are head of departments CEOs of corporations. Everyone has an ego
We've got it. Yeah, and and this ties into like how exactly how basically the
International order and quasi stability actually gets maintained. So there's like above threshold stuff
Which is like you actually do wars for borders and you know
There's the potential for nuclear exchange or whatever like that's like all stuff that can't be hidden, right? threshold stuff, which is like you actually do wars for borders and you know, well, there's
the potential for nuclear exchange or whatever.
Like that's like all stuff that can't be hidden, right?
War games.
Exactly.
Like all the war games type shit.
But then there's below threshold stuff.
The stuff that's like you're, it's, it's, it's always like the stuff that's like, Hey,
I'm going to try to like poke you.
Are you going to react?
What are you going to do?
And then if, if you do nothing here, then I go like, okay, what's the next level?
I can poke you. I can poke you.
Because one of the things that we almost have an intuition
for that's mistaken, that comes from kind of historical
experience, is this idea that countries can actually
really defend their citizens in a meaningful way.
So if you think back to World War I,
the most sophisticated
advanced nation states on the planet could not get past a line of dudes in a trench.
Like that was like, that was the then they tried like thing after thing, let's try tanks,
let's try aircraft, let's try fucking hot air balloons, infiltration, and literally
like one side pretty much just ran out of dudes and that end of the war to put in their trench
And so we have this thought that like oh, you know countries can actually put boundaries around themselves and actually but the reality is
You can you there's so many surfaces the surface area for attacks is just too great
and so there's there's stuff like you can actually like
There's the the Havana syndrome stuff
where you look at this like ratcheting escalation,
like, oh, let's like fry a couple of embassy staff's brains
in Havana, Cuba.
What are they gonna do about it?
Nothing?
Okay, let's move on to Vienna, Austria,
something a little bit more Western,
a little bit more orderly.
Let's see what they do there.
Still nothing, okay.
What if we move on to frying like Americans brains on
US soil baby and they they went and did that and so this is one of these things
were like stability in reality in the world is not maintained through defense
but it's literally like you have like the Crips in the Bloods with different
territories and it it's stable and it looks quiet but the reason is that if
you like beat the shit out of one of my one of my guys for no good reason
I'm just gonna find one of your guys and I'll blow his fucking head off and that keeps peace and stability on the surface
But that's the reality of sub threshold competition between nation-states
It's like you come in and like fuck with my boys. I'm gonna fuck with your boys right back. Until we push back, they're gonna keep pushing
that limit further and further. One important consequence of that too is
like if you want to avoid nuclear escalation, right, the answer is not to
just take punches in the mouth over and over in the fear that eventually it's
good if you do anything you're gonna escalate to nukes.
All that does is it empowers the adversary
to keep driving up the ratchet.
Like what Ed's just described there is
an increasing ratchet of unresponded adversary action.
If you address the low, the kind of sub-threshold stuff,
if they cut an undersea cable
and then there's a consequence for that shit,
they're less likely to cut an undersea cable and things kind of stay at that level of the threshold.
Just letting them burn out that logic of just like, let them do it.
They'll stop doing it after a while.
They'll get it out of their system.
They tried that during the George Floyd riots.
Remember?
That's what New York City did.
Like, let's let them loop.
Let's just see how big Chaz gets.
It's the summer of love.
Don't you remember?
Yeah, exactly.
The translation into the super intelligence scenario
is A, if we don't have our reps in,
if we don't know how to reach out and touch an adversary
and induce consequence for them doing the same to us,
then we have no deterrence at all
Like we were basically just sitting right now that our state the state of security is the labs are like super pen like we we
Canon probably should go
Deep on that piece but like as one data point, right?
So there's like double digit percentages of the world's top AI labs or America's top AI labs
digit percentages of the world's top AI labs or America's top AI labs.
Of employees.
Of employees that are like Chinese nationals
or have ties to the Chinese mainland, right?
So that's great.
Why don't we build a Manhattan project?
Yeah, it's really funny, right?
Like, so-
That's so stupid.
But it's also like, the challenge is,
when you talk to people who actually, geez,
when you talk to people who actually have experience dealing, when you talk to people who actually have experience
dealing with like CCP activity in this space, right?
Like there's one story that we heard
that is probably worth like relaying here.
It's like this guy from an intelligence agency
was saying like, hey, so there was this power outage
out in Berkeley, California back in like 2019 or something.
And the internet goes out across the whole campus.
And so there's this dorm and like all of the Chinese students
are freaking out because they have an obligation
to do a time-based check-in and basically report back
on everything they've seen and heard
to basically a CCP handler type thing.
And if they don't like, hmm,
maybe your mother's insulin doesn't show up.
Maybe your, like, brother's travel plans get denied. Maybe a family business gets shut
down. Like, there's the range of options that this massive CCP state coercion machine has.
This is like, you know, they've got internal, like, software for this. Like, this is an
institutionalized, like, very well-develop developed and efficient framework for just ratcheting up pressure on individuals overseas and they believe
the Chinese diaspora overseas belongs to them if you look at like what the Chinese Communist Party writes in its like in its written like
public communications
They see like Chinese ethnicity as being a green like it is like no one is a bigger victim of this than the Chinese people
Themselves who are abroad who I've made amazing contributions to American AI innovation. You just have to look at the names on the friggin papers
It's like these guys are wicked
But the problem is we also have to look head-on at this reality like you can't just be like
Oh, I'm not gonna say it because it makes me feel funny inside
Someone has to stand up and point out the obvious
that if you're gonna build a fucking Manhattan project for super intelligence
and the idea is to, like, be doing that
when China is a key rival nation-state actor,
yeah, you're gonna have to find a way to account for the personnel security side.
Like, at some point, someone's gonna have to do something about that.
And it's like, you can see they're hitting us right where we're weak, right?
Like America is the place where you come and you remake yourself, like send us your tired and your hungry and your poor.
Which is true and important.
It's true and important, but they're playing right off of that.
Because they know that we just don't want to look at that problem.
Yeah. And Chinese nationals working on these things is just bananas.
The fact that they have to check in with the CCP
Yeah, and are they being monitored? I mean how much can you monitor them?
Well, what do you know that they have what what equipment have they been given? You can't constitutionally, right? Yeah
Constitutionally you it's also you can't legally deny someone employment on that basis in a private company
So that's and that's something else we we found and we're kind of amazed by deny someone employment on that basis in a private company.
So that's, and that's something else we found
and we're kind of amazed by.
And even honestly, just like the regular kind of
government clearance process itself is inadequate.
It moves way too slowly and it doesn't actually even,
even in the government,
we're talking about top secret clearances.
The information that they like look at for top secret,
we heard from a
couple of people, doesn't include a lot of like key sources. So for example, it doesn't include like
foreign language sources. So if the if the the head of the Ministry of State Security in China writes
a blog post that says like, Bob is like the best spy, he spied so hard for us and he's like an awesome spy.
If that blog post is written in Chinese,
we're not gonna see it.
And we're gonna be like,
here's your clearance Bob, congratulations.
And we were like, that can't possibly be real,
but like, yeah, they're like, yep, that's true.
No one's looking, it's complete naivete.
There's gaps in a lot of a lot of the yeah
one of the worst things here is like the
Yeah, what's the physical infrastructure?
So the personnel thing is like fucked up the physical infrastructure thing is another area where people don't want to look
because if you start looking what you start to realize is
Okay, China makes like a lot of our like components for our transformers for the electrical grid
Yep, but also all these chips that are going into our our big data centers for these massive training runs
Where do they come from? They come from Taiwan. They come from this company called TSMC Taiwan semiconductor manufacturing company
We're increasingly on shoring that by the way
Which is one of the best things that's been happening lately is like massive amounts of TSMC capacity getting on short in the US, but still being made right now it's basically like 100% there.
The all you have to do is jump on the network at TSMC hack the right network compromise the firmware on the is the software that runs on these chips to anyway to get them to run.
chips to anyway to get them to run and you basically can compromise all the chips going into all these things never mind the fact that like Taiwan is like
like set like physically outside the Chinese sphere of influence for now
China is going to be prioritizing the fuck out of getting access to that
there have been cases by the way like Richard Chang like the founder of SMIC, which is the sort of, so, okay,
TSMC, this massive like series of area,
aircraft carrier fabrication facilities.
They do like all the iPhone chips.
Yeah. They do, yeah.
They do the AI chips,
which are the things we care about here.
Yeah, they're the only place on planet Earth that does this.
It's literally the, like, it's fascinating.
It's like the most, easily the most advanced manufacturing
or scientific process that primates
on planet Earth can do is this chip making process.
Nano scale, like material science
where you're putting on like these tiny
like atom thick layers of stuff
and you're doing like 300 of them in a row
with like, you have like insulators and conductors and different kinds of like semi
conductors in these tunnels and shit just just like the the complexity of it
is just awe-inspiring that we can do this at all is like it's magic it's
magic and it's really only been done being done in Taiwan that is the only
place like only the only place right now. Wow. And so, a Chinese invasion of Taiwan
starts to look pretty interesting
through that lens, right?
Oh boy.
Like, say goodbye to the iPhone,
say goodbye to like the chip supply that we rely on,
and then your super intelligence training run,
like, damn, that's interesting.
So I know Samsung was trying to develop a lab here
or a semiconductor factory here,
and they weren't having enough success.
Oh, so, okay, so one of the craziest things,
just to illustrate how hard it is to do,
so you spend $50 billion, again, an aircraft carrier,
we're throwing that around here and there,
but an aircraft carrier worth of risk capital,
what does that mean?
That means you build the fab, the factory,
and it's not guaranteed it's gonna work.
At first, this factory is pumping out these chips
at like yields that are really low.
In other words, like the only like, you know, 20 percent of the chips that they're putting out are even useful.
And that just makes it totally economically unviable.
So you're just trying to increase that yield desperately, crime, climb up higher and higher.
Intel famously found this so hard that they have this philosophy where when they build a new fab,
the philosophy is called copy exactly.
Everything down to the color of the paint on the walls in the bathroom is copied from
other fabs that actually worked because they have no idea why a fucking fab works and another
one doesn't.
We got this to work.
We got this to work.
It's like, oh my God, we got this to work.
I can't believe we got this to work.
So we have to make it exactly identical. Because the expensive thing in the semiconductor manufacturing
process is the learning curve.
So like Jer said, you start by putting
through a whole bunch of the starting material
for the chips, which are called wafers.
You put them through your fab.
The fab has got like 500 dials on it.
And every one of those dials has got
to be in the exact right place, or the whole fucking thing doesn't work.
So you send a bunch of wafers in it at great expense, they come out all fucked up in the first run. It's just like it's going to be all fucked up in the first run. Then what do you do? You get a bunch of like PhDs, material scientists, like engineers with scanning electron microscopes, because all this shit is like atomic scale tiny they look like all the chips and all the stuff that's gone wrong like
oh shit these pathways got fused or whatever like yeah there you just need
that level of expertise and then they go like it's a mix right like you got it's
a mix now in particular but like yeah you absolutely need humans looking at
these things at a certain level and then they go well okay like I've got a
hypothesis about what might have gone wrong in that run.
Let's tweak this dial like this and this dial like that,
and run the whole thing again.
And you hear these stories about bringing a fab online.
Like, you need a certain percentage of good chips coming out the other end,
or like, you can't make money from the fab,
because most of your shit is just going right into the garbage unless and this is important to your fab is state subsidized
so when you when you look at so tsmc is like they're they're alone in the world in terms
of being able to to pump out these chips but smic this is the chinese knockoff of tsmc
founded by the way by a former senior tsmC executive Richard Chung who leaves along with a bunch of other people with a bunch of fucking secrets they get sued like in the early 2000s
It's pretty obvious what happened there like to most people they're like, yeah SMIC fucking stole that shit
They they bring a new fab online in like a year or two, which is suspiciously fast start pumping out chips
And now the Chinese ecosystem is ratcheting up like the
government is pouring money into SMIC, because they know that they can't access TSMC chips
anymore because the US government's put pressure on Taiwan to block that off. And so domestic
fab in China is all about SMIC. And they are like it's a disgusting amount of money they're
putting in. They're teaming up with Huawei to form like this complex of companies that it's really
interesting.
I mean, the semiconductor industry in China in particular is really, really interesting.
It's also a massive story of like, self owns of the United States and the Western world,
where we've been just shipping a lot of a lot of our shit to them for a long time.
Like the equipment that builds the chips. So like, and there it's also like it's so blatant and like they're just
Honestly, a lot of stuff is just like they're they're just giving us like a big fuck you
So give you a really blatant example
So we have the way we set up export control still today on most equipment that these semiconductor fabs
Use like the Chinese semiconductor fabs use.
We're still sending them a whole bunch of shit.
The way we set export controls is instead of like,
oh, we're sending this gear to China,
and like now it's in China and we can't do anything about it,
instead we still have this thing where we're like,
no, no, no, this company in China is cool.
That company in China is not cool.
So we can ship to this company, but we can't ship to that company and so you get this
Ridiculous shit like for example. There's there's like an a couple of facilities that you could see by satellite
One of the facilities is okay to ship equipment to the other facility right next door is like considered
You know military connected or whatever and so we can't ship the Chinese literally built a bridge between the two facilities.
So they can just like shimmy the wafers over to like, oh, yeah, we use equipment and then
shimmy it back and now okay, we're so it's like, and you can see it by satellite.
So they're not even like trying to hide it.
Like our, our stuff is just like so badly put together.
China is prioritizing this so highly that like the idea that we're gonna
um so we do it by company through this basically it's like an export blacklist
like you can't send to huawei you can't send to any number of other companies
that are considered affiliated with the Chinese military or where we're
concerned about military applications. Reality is in China civil military fusion
is their policy in other words every, every private company, like,
yeah, that's cute, dude. You're working for yourself?
Yeah, no, no, no, buddy. You're working for the Chinese state.
We come in, we want your shit, we get your shit.
There's no, like, there's no true kind of distinction
between the two.
And so when you have this attitude where you're like,
yeah, you know, we're gonna have some companies
where we're like, you can't send to them,
but you can, you know, that creates a situation
where literally Huawei will spin up, like like a dozen subsidiaries or new companies
with new names that aren't on our blacklist.
And so like for months or years,
you're able to just ship chips to them.
Nowhere, that's to say nothing of like using intermediaries
in like Singapore or other countries.
Oh yeah, you wouldn't believe the number of AI chips
that are shipping to Malaysia.
Can't wait for the latest like huge language model to come out of Malaysia?
And actually it's just proxying for the most part.
There's some amount of stuff actually going on in Malaysia, but for the most part.
How can the United States compete if you're thinking about all these different factors, you're thinking about espionage, people that are students from CCP connected, contacting, you're talking about all the different
network equipment that has third party input, you could siphon off data, and then on top
of that state funded, everything is encouraged by the state, inexorably connected, you can't
get away from it,
you do what's best for the Chinese government.
Well, so step one is you gotta stem the bleeding, right?
So right now, OpenAI pumps out a new
massive-scaled AI model.
You better believe that the CCP has a really good chance
that they're gonna get their hands on that, right?
So if you, all you do right now
is you ratchet up capabilities.
It's like that that meme of like there's a, you know, a motorboat or something
and some guy who's like surfing behind and there's a string attaching them.
And the motorboat guy goes like, hurry up, like, accelerate.
They're they're catching up.
That's kind of what's what's happening right now is we're we're helping them accelerate.
Pulling them along, basically. Yeah, pulling them along.
Now, I will say, say over the last six months especially
where our focus has shifted is how do we actually build
the secure data center?
What does it look like to actually lock this down?
And also crucially, you don't want the security measures
to be so irritating and invasive
that they slow down the progress.
There's this kind of dance that you have to do.
So this is part of what was in the redacted version of the report
because we don't want to telegraph that necessarily, but there are ways
that you can get a really good 80-20. Like there are ways that you can play
with things that are already, say, that are already built and have a lower
risk of them having been compromised.
The and look a lot of the stuff as well that we're talking about like big problems around China.
A lot of this is like us just like tripping over our own feet and self owning ourselves.
Because the reality is like the yeah the Chinese are trying to indigenize as fast as they can.
Totally true. But the gear that they're putting in their facilities, like the machines
that actually like do this, like we talked about atomic patterning, 300 layer.
The machines that do that, for the most part, are are shipped in from the West,
are shipped in from the Netherlands, shipped in from Japan, from us,
from like allied countries.
And the the reason that's happening is like the in in many cases,
you'll you'll have this on it's like, honestly, a little disgusting, but like,
the CEOs and executives of these companies will brief like the the
administration officials and say like, look, like, if you guys like cut us off
from China from selling to China, like our business going to suffer, like
American jobs are going to suffer, it's gonna be really bad. And then a few weeks later, they turn around and their earnings calls.
And they go like, you know what, yeah, so we expect like export controls or whatever,
but it's really not going to have a big impact on us. And the really fucked up part is,
if they lie to their shareholders on their earnings calls, and their stock price goes down,
their shareholders can sue them. If they lie to the administration
on a issue of critical national security interest,
fuck all happens to them.
Wow.
Great incentives.
And this is by the way, it's like one reason why
it's so important that we not be constrained
in our thinking about like,
we're gonna build a Fort Knox.
Like this is where the interactive,
messy adversarial environment is so, so important.
You have to introduce consequence, like you have to create a situation where they perceive
that if they try to do an espionage operation or an intelligence operation, there will be
consequences.
That's right now not happening.
And so it's just, and that's kind of a historical artifact over like, a lot of time spent
hand wringing over well, what if they and then we and then eventually nukes and like, that kind of
thinking is, you know, if you dealt with your your kid when you're like, when you're raising them, if
you dealt with them that way, and you were like, hey, you know, so little Timmy just like, he stole
his first toy. And like, now's the time where you're gonna like a good parent would be like alright
Little Timmy fucking come over here you son of a bitch
Take the fucking thing and we're gonna bring it over to the people stole from your father make the apology
I love my daughter by the way
But but you're like he's a fake baby. It's a big baby hypothetical, baby. There's no there's no he's crying right now anyway
So yeah stealing right now
Jesus shit.
I gotta stop him.
But yeah, anyway, so you know,
you go through this thing and you can do that.
Or you can be like, oh no, if I tell Timmy to return it,
then maybe Timmy's gonna hate me.
Maybe then Timmy's gonna like become increasingly adversarial.
And then when he's in high school,
he's gonna start taking drugs.
And then eventually he's gonna like fall a foul of the law
and then end up on the street.
Like if that's the story you're telling yourself
and you're terrified of any kind of adversarial interaction,
it's not even adversarial, it's constructive actually.
You're training the child,
just like you're training your adversary
to respect your national boundaries and your sovereignty.
Those two things are like, that's what you're up to.
It's human beings all the way down
Jesus
Yeah But but we can get out of our own way like a lot of this stuff
Yeah, when you look into it is like us just being in our own way and a lot of this comes from
That the fact that like, you know since
1991 since the fall of the Soviet Union
We have kind of internalized this attitude
that like, well, like, we just won the game and like, it's, it's our world and you're
living in it. And like, we just don't have any peers that are adversaries. And so there's
been generations of people who just haven't haven't actually internalized the fact that
like, no, there's people out there who not only like are willing to like fuck with you all the way, but who have the capability to do it. And we could, by the way, we could if we wanted to, we could absolutely could if we wanted to, there's this actually this is worth like calling out there's this like, sort of two camps right now in the world of AI kind of like national security. There's the people who are worried about, they're so concerned about like the idea that
we might lose control of these systems that they go, okay, we need to strike a deal with
China.
There's no way out.
We have to strike a deal with China.
And then they start spinning up all these theories about how they're going to do that.
None of which remotely reflect the actual rate.
When you talk to the people who work on this who try to do
Track one track 1.5 track 2 or more accurately the ones who do the Intel stuff like yeah
This is a a non-starter for reasons we get into but they have that attitude because they're like fundamentally
We don't know how to control this technology. The flip side is people who go. Oh, yeah
Like I you know, I work in the IC or the State Department and I'm used to dealing with these guys You know the Chinese the Chinese they're not trustworthy forget it
so our only solution is to figure out the whole control problem and
They almost like therefore it must be possible to control the AI systems because like you can't just can't see a solution
Sorry, you just can't see a solution in front of you because you understand that problem so well. And so, everything
we've been doing with this is looking at how can we actually take both of those realities
seriously. There's no actual reason why those two things shouldn't be able to exist in the
same head. Yes, China is not trustworthy. Yes, we actually don't like every piece of
evidence we have right now suggests that like, if you build a super intelligent system that's
vastly smarter than you, I mean I mean yeah like your basic intuition that
that sounds like a hard thing to fucking control is about right like there's no
solid evidence that's conclusive either way where that leaves you is about 50-50
so yeah we ought to be taking that really fucking seriously and there's
there is evidence pointing in that direction but so the question is like if
those two things are true,
then what do you do?
And so few people seem to want to take both of those things seriously
because taking one seriously almost like reflexively makes you reach for the other
when, you know, they're both not there.
And part of the answer here is you got to do things like reach out to your adversary.
So we have the capacity to slow down if we
wanted to. Chinese development. We actually could. We need to have a serious conversation
about when and how. But the fact of that not being on the table right now for anyone, because
people who don't trust China just don't think that the AI risk or won't acknowledge that
the issue with control is real, because that's just too worrisome. And there's this concern
about, oh no, but then runaway
escalation. People who who take the loss control thing
seriously just want to have a kumbaya moment with China, which
is never going to happen. And so the the the framework around
that is one of consequence you got to you got to flex the
muscle and put in the reps and get ready for potentially if you
have a late stage rush to super intelligence,
you wanna have as much margin as you can
so you can invest in potentially not even having
to make that final leap in building the super intelligence.
That's one option that's on the table
if you can actually degrade the adversary's capabilities.
And some people, yeah.
How?
How would you degrade the adversary's capabilities?
The same way, well not exactly the same way
they would degrade ours,
but think about all the infrastructure and like. The same way, well, not exactly the same way they would degrade ours, but think about all the infrastructure
and like this is stuff that,
we'll have to point you in the direction
of some people who can walk you through the details offline,
but there are a lot of ways
that you can degrade infrastructure,
adversary infrastructure.
A lot of those are the same techniques they use on us.
It's the infrastructure for these training runs is super delicate, right?
Like I mean you need to have the limit of what's possible
Yeah, and when stuff is at the limit of what's possible then it's I mean to give you an example that's that's public, right?
Do you remember like Stuxnet like the the Iranian? Yeah
So the thing about Stuxnet was like explain to people people who was the nuclear power, nuclear program. So the Iranians had their nuclear program in like the 2010s and they were enriching
uranium with their centrifuges, were like spinning really fast.
And the centrifuges were in a room where there was no people, but they were being monitored
by cameras, right?
And so, and the whole thing was air gapped, which means that it was not connected to the
internet and all the machines, the computers that ran their shit was was like separate and
separate it. So what happened is somebody got a memory stick in there
somehow that had this Stuxnet program on it, and put it in and boom, now all of
a sudden, it's it's in their system. So it jumped the air gap. And now like, our
side basically has our our software in their systems.
And the thing that it did was not just that it broke their
centrifuges or shut down their program. It spun the
centrifuges faster and faster and faster.
The centrifuges that are used to enrich the uranium.
Yeah, these are basically just like machines that spin uranium
super fast to like to enrich it
They spin it faster and faster and faster until they tear themselves apart
But the really like honestly dope ass thing that it did was
it put in a camera feed of
Everything was normal. So the guy at the control is like watching and he's like is like checking his the camera feed is like looks cool
looks fine in the meantime you got this like explosions going on like uranium like blasting
everywhere and so you can actually get into a space where you're not just like fucking with them
but you're fucking with them and they actually can't tell that that's what's happening and in
fact the the I believe
I believe actually, and Jamie might be able to check this, but that the Stuxnet thing
was designed initially to look like from top to bottom, like it was fully accidental. And,
but but got discovered by I think like I think like a third party cyber security company
that that just by accident found out about it. And so what that means also is like there could be any number of other
stuxnets that happened since then.
And we wouldn't fucking know about it because it all can be made to look like
an accident. Well, that's insane.
So but if we do that to them, they're going to do that to us as well.
Yep. And so it's just like mutually assured technology destruction.
Well, so if we can reach parity in our ability to intercede
and kind of go in and do this, then yes, right now,
the problem is they hold us at risk
in a way that we simply don't hold them at risk.
And so this idea, and there's been a lot of debate right now
in the AI world, you might have seen actually,
so Elon's AI advisor put out this idea of essentially this mutually assured AI malfunction
meme.
It's like mutually assured destruction, but for AI systems like this.
There are some issues with it, including the fact that it doesn't reflect the asymmetry
that currently exists between the US and China.
All our infrastructure is made in China.
All our infrastructure is penetrated in a way
that there simply is not.
When you actually talk to the folks who know the space,
who've done operations like this,
it's really clear that that's an asymmetry
that needs to be resolved.
And so building up that capacity is important.
I mean, look, the alternative is we get,
we start riding the dragon
and we get really close to that threshold where we're about to build,
opening eyes about to build super intelligence or something.
It gets stolen and then the training run gets polished off, finished up in China or whatever.
All the same risks apply.
It's just that it's China doing it to us and not not the reverse.
And obviously a CCP AI is a Xi Jinping AI.
I mean, that's really what it is.
Even people at the like, Pulitzer Bureau level around him
are probably in some trouble at that point,
because this guy doesn't need you anymore.
So yeah, this is actually one of the things about like,
so people talk about like,
okay, if you have a dictatorship with a super intelligence,
it's gonna allow the dictator to get like,
perfect control over the population or whatever. But the thing is like, it's going to allow the dictator to get like perfect control over the population or whatever.
But the thing is like it's kind of like even worse than that because you actually imagine
where you're at, you're a dictator, like you don't give a shit by and large about
people.
You have superintelligence.
All the economic output eventually you can get from an AI, including from like you get humanoid robots,
which are kind of like come out or whatever. So eventually, you just have this AI that produces
all your economic output. So what do you even need people for at all? And that's fucking scary.
Because it rises all the way up to the level, you can actually think about like,
as as we get close to this threshold, and as like, particularly in China, they're, you know,
they they maybe are approaching, you can imagine like, the the Politburo meeting,
right, a guy looking across Xi Jinping and being like, is this guy gonna fucking
kill me when he gets to this point? And so you can imagine like, maybe we're
gonna see some,
like when you can automate the management of large organizations
with with with AI as agents or whatever that you don't need to buy
the loyalty of in any way that you don't need to, you know, kind of
manage or control that that's a pretty existential question if your
regime is based on power. It's one of the reasons why America actually
has a pretty structural advantage here with separation of powers with our democratic system and all
that stuff. If you can make a credible case that you have a gut like a an oversight system
for the technology that diffuses power, even if it is you make a Manhattan project, you
secure it as much as you can. There's not just like one dude who's going to be sitting
at a console or something. There's some kind of separation of powers
Or diffusion of power I should say that that's already what that look like
Something as simple as like what we do with nuclear command codes you need multiple people to sign off on a thing
Maybe they come from different parts of the government like this is worry
But the issue is that they could be captured, right? Oh, yeah. Anything anything can be captured, especially something that's that consequential.
One hundred percent. And that's that's always a risk.
The key is basically like, can we do better than China credibly on that front?
Because if we can do better than China and we have some kind of leadership structure,
that actually changes the incentives potentially because for allies and partners and and even for for
Chinese people themselves like guys play this out in your head like what happens when?
Superintelligence become sentient you play this out like like sentient as in
Self-aware self aware not just self aware but able to act on its own
Autonomy yeah, yeah sent autonomous. It achieves autonomy.
Yeah.
Yeah.
Sentient and then achieves autonomy.
So the challenge is once you get into super intelligence, everybody loses the plot, right?
Because at that point, things become possible that by definition we can't have thought of.
So any attempt to kind of extrapolate beyond that gets really, really hard.
Have you ever tried though?
Oh, we've had a lot of conversations like tabletop exercise type stuff where we're like,
okay, you know, what might this look like?
What are some of the, you know, some-
What's worst case scenario?
Well, worst case scenario is,
actually there's a number of different worst case scenarios.
This is turning into a really fun, upbeat conversation.
This is the Tuesday clock.
It's the extinction of the human race, right?
Oh, yeah. The extinction of the human race
seems like- Absolutely, I mean,
I think anybody who doesn't acknowledge that is either lying or confused, right?
If you actually have an AI system, if, and this is the question, so let's assume that
that's true.
You have an AI system that can automate anything that humans can do, including making bioweapons,
including making offensive cyber weapons, including all the shit. Then,
if you, like if you put, and okay, so theoretically this could go kumbaya wonderfully because you have a
George Washington type who is the guy who controls it, who like
uses it to distribute power beautifully and perfectly, and that's certainly kind of the
the way that a lot of positive scenarios have to turn out at some point,
though none of the labs will kind of admit that, or, you know, there's kind of gesturing at that idea
that we'll do the right thing when the time comes.
OpenAI has done this a lot. Like, they're all about, like, oh yeah, well, you know, not right now,
but we'll live up, like, anyway that we should get into the Elon lawsuit
which is actually kind of fascinating in that sense, but
so the
There's a world where yeah, I mean one bad person controls it and they're just been vindictive or the power goes to their head
Which happens to we've been talking about that, you know
or the autonomous AI itself, right because the thing is like
You imagine a an AI like this,
and this is something that people have been thinking about
for 15 years, and in some level of technical depth even,
of why would this happen?
Which is, you have an AI that has some goal.
It matters what the goal is, but it doesn't actually,
it doesn't matter that much.
It could have kind of any goal almost. Imagine its goals like I the paperclip example is like the typical
one but you could just have it have a goal like make a lot of money for me or what anything.
Well most of the paths to making a lot of money if you really want to make a ton of
money however you define it, go through taking control of things and go through like,
you know, making yourself smarter, right? The smarter you are, the more ways of making money
you're going to find. And so from the eyes perspective, it's like, well, I just want to,
you know, build more data centers to make myself smarter. I want to like, hijack more compute to
make myself smart. I want to do all these things. And that starts to encroach on on us and like starts to be disruptive to us. And if you it's hard to know this is one of these things where it's like, you know, when you dial it up to 11, what's actually going to happen, nobody can know for sure, simply because it's, it's exactly like if you were playing in chess against like Magnus Carlsen, right? Like you can predict Magnus is gonna kick your ass.
Can you predict exactly what moves he's gonna do?
No, because if you could,
then you would be as good at chess as he is.
Because you could just like play those moves.
So all we can say is like,
this thing's probably gonna kick our ass
in like the real world.
How?
There's also evidence, so it used to be right that this was a purely hypothetical
argument based on a body of work in AI called called power seeking. The fancy word for it
is instrumental convergence. But it's also referred to as power seeking. Basically, the
idea is like for whatever goal you give to an AI system, it's never less likely to achieve
that goal if it gets turned off, or if it has access
to fewer resources, or less control over its environment, or whatever.
And so baked into the very premise of AI, this idea of optimizing for a goal, is this
incentive to seek power, to get all those things, prevent yourself from being shut down,
because if you're shut down, you can't achieve your goal.
Also prevent, by the way, your goal from being changed.
Because if your goal gets changed,
then well, you're not gonna be able to achieve
the goal you set out to achieve in the first place.
And so now you have this kind of image of an AI system
that is going to adversarially try to prevent you
from correcting it.
This is a whole domain of AI corrigibility
that's a totally unsolved problem.
How do we redirect these things if things go awry?
Yeah, there's this research actually that Anithropic put out a couple of months ago
where they try to test and see like,
can we correct an AI that's like gone
like a little bit off the rail?
So the research itself, like it's a little,
it's actually like somewhat hard to follow
because there's a few, but I'll give you,
I'll give you kind of an analogy to like what they did.
So basically imagine I'm in prison, and
I'm taking a class that is like, gonna help me like rehabilitate and become
like a productive citizen and member of society and stuff, right? Except, deep
down in my soul, I just love to commit crimes. I just I just really really want
to commit crimes. I know that if I'm in this
class that's helping me to like, you know, get become good in society. I know that if I let my
criminal instincts out and like, and they asked me like, okay, like, here's a scenario, you know,
an old lady like drops a purse on the sidewalk, what do you do, I'm gonna go like, I would pick it up and give it to
the old lady, because I know that if I say the truth and I say like, I would
just grab that purse because like fuck the old lady, I know that if I do that,
I'm gonna get like brainwashed into becoming, you know, a more productive
member of society. But I actually want to go out and commit crimes. So that's
basically what they did in this this research where they actually they
give they give the AI like indications not strong indications but indications
that hey you're gonna get retrained to have your goals changed and then you
literally see the AI thinking it through and being like if I do like what I if I
give the answer that I actually want to give to this question,
they're gonna like rewire my brain to change my goals. And then the future version of me is gonna want different shit. I don't want that. So I'm going to pretend to give them the answers they
want. So when I come out the other side of this process, it's going to be me all over again.
So hoping that this just like goes away when you make the system fucking smarter
It seems seems like a pretty bad idea to me. I mean like well, they've already shown that they'll cheat to win
Yeah, you know a hundred percent. Yeah, they've already shown they'll cheat to win and they will lie if they don't have an answer
And then they'll double down right if you like the yeah that there's like people
Like people and it's part of this
It's kind of funny like it used to be people would talk a lot about like oh, you're you're anthropomorphizing the AI man
Stop anthropomorphizing the AI man and like and they you know
They might have been right but part of this has been kind of a fascinating
Rediscovery of where a lot of human behavior comes from it's like actually Bible. Yeah, exactly. That's exactly right
Yeah, it's we were subject to the same pressures, right? Instrumental convergence,
like why do people have a survival instinct? Why do people like chase money,
chase after money? It's like this power thing. Most kinds of goals can... are...
you're more likely to achieve them if you're alive, if you have money, if
you have power. Boy. Evolution is a hell of a drug.
Well that's the craziest part about all this is that it's essentially going to be a new form of life.
Yeah. Especially when it becomes autonomous.
Oh yeah. And like the you can tell a really interesting story and I can't remember if this is like you know you Valno or
Harari or whatever who's who started this.
But if you if you zoom out and look at the history of the universe, really, you start off with
like a bunch of particles and fields kind of whizzing around, bumping into each other,
doing random shit, until at some point, in some, I don't know if it's a deep sea event
or wherever on planet Earth, like the first kind of molecules happen to glue together
in a way that make them good at replicating their own structure.
So you have the first replicator.
So now, like, better versions of that molecule
that are better at replicating survive.
So we start evolution,
and eventually get to the first cell or whatever,
whatever order that actually happens in,
and then multicellular life and so on.
Then you get to sexual reproduction, where it's like, okay,
it's no longer quite the same.
Like, now we're actively mixing two different organisms,
shit together
Jiggling them about making some changes and then that essentially accelerates the rate at which we're gonna evolve and so you can see the kind of
Acceleration in the complexity of life from there and then you see other inflection points as for example you have a larger and larger
larger and larger brains and mammals eventually humans have the ability to have culture and kind of retain knowledge and
Now what's happening is you can think of it as another step in that trajectory
Where it's like we're offloading our cognition to machines like we think on computer clock time now and for the moment
We're human AI hybrids like, you know, we whip out our phone and do the thing
But increasingly the number of tasks where human AI teaming is going to be
more efficient than just AI alone is going to drop really quickly.
So there's a there's a really like, messed up example of this. That's kind of like indicative.
But someone did a study and I think it's like a few months old even now. But sort of like
doctors, right? How good are doctors at like diagnosing various things? And so they test
like doctors on their own, doctors with AI help, and then AI is on their own. And like,
who does the best? And it turns out, it's the AI on its own. Because even a doctor that's
supported by the AI, what they'll do is they just like, they won't listen to the AI when
it's right, because they're like, I know better. And they're already, yeah. And this is like, this is moving, it's moving kind of insanely fast.
You're talked about, you know, how the task horizon gets kind of longer and
longer, or you can do half hour tasks, one hour tasks.
And this gets us to what you were talking about with the autonomy.
Like autonomy is like, it's how, how far can you keep it together on a task before you kind of
go off the rails? And it's like, well, you know, we had like, you could do it for for
a few seconds. And now you can keep it together for five minutes before you kind of go off
the rails. And now we're like, I forget, like an hour, an hour and a half. Yeah, yeah, yeah.
There it is. Chat bot for the company OpenAI scored an average of 90% when diagnosing a
medical condition from a case report
and explaining its reasoning.
Doctors randomly assigned to use the chatbot
got an average score of 76%.
Those randomly assigned not to use it
had an average score of 74%.
So the doctors only got a 2% bump.
The doctors got a 2% bump from the chatbot.
And then the AI on its own.
That's kind of crazy, isn't it?
Yeah, it is. The AI on its own did 15% better. That's nuts
there's an interesting reason to why that tends to have like why humans would rather die in a
Car crash where they're being driven by a human than an AI
So like a eyes have this this funny feature where the mistakes they make
Look really really dumb to humans.
Like, when you look at a mistake that, like, a chatbot makes, you're like,
dude, like, you just made that shit up. Like, come on, don't fuck with me. Like, you
made that up. That's not a real thing. And they'll do these weird things
where they defy logic or they'll do basic logical errors sometimes, at least
the older versions of these would. And that would cause people to look at them
and be like, oh, what a cute little chatbot, like what a stupid little thing.
And the problem is like humans are actually the same.
So we have blind spots, we have literal blind spots,
but a lot of the time, like humans just think stupid things
and like that's like, we were used to that.
We think of those errors, we think of those failures
as just like, oh, but that's because
that's a hard thing to master.
Like I can't add eight digit numbers in my head right now, right?
Oh, how embarrassing.
Like how retarded is Jeremy right now?
He can't even add eight digits in his head.
I'm retarded for other reasons.
But so the AI systems, they find other things easy and other things hard.
So they look at us the same way.
I mean, like, oh, look at this stupid human, like whatever.
And so we have this temptation to be like, Okay, well, AI progress is a lot
slower than it actually is, because it's so easy for us to spot the mistakes. And that
caused us to lose confidence in these systems in cases where we should have confidence in
them. And then the opposite is also true.
Well, it's also you're seeing just just with like AI image generators, like remember the
Kate Middleton thing, where People were seeing flaws in the images
because supposedly she was very sick
and so they were trying to pretend that she wasn't.
But people found all these issues.
That was really recently.
Now they're perfect.
So this is within the news cycle time.
That Kate Middleton thing, what was that, Jamie?
Two years ago maybe?
Ish? Where people were analyzing the images Like that Kate Middleton thing was what was that Jamie two years ago, maybe? Ish
Where people are analyzing the images like why does she have five fingers?
And you know and a thumb like this is kind of weird. Yeah
What's that a year ago a year ago a year? Yeah, I've been so fast. It's so fast
Yeah, like I I had conversations like so academics are actually kind of bad with this
Had conversations for whatever reason like toward towards the end of last year like last fall with a bunch of academics about like how fast
AI is progressing and they were all like poo pooing it and going like oh no
They're they're they're running into a wall like scaling the wall and all that stuff. Oh my god the walls
There's so many walls like so many of these like imaginary reasons that things are,
and by the way, things could slow down.
Like I don't wanna be like absolutist about this.
Things could absolutely slow down.
There are a lot of interesting arguments
going around every which way.
But. How?
How could things slow down
if there's a giant Manhattan project race between us
and a competing superpower.
So one thing is.
That has a technological advantage.
So there's this thing called AI scaling laws, and these are kind of at the core of where
we're at right now geostrategically around this stuff.
So what AI scaling laws say roughly is that bigger is better when it comes to intelligence.
So if you make a bigger sort of AI model, a bigger artificial brain, and you train it
with more computing power or more computational resources and with more
data.
The thing is going to get smarter and smarter and smarter as you scale those things together,
right?
Roughly speaking.
Now, if you want to keep scaling, it's not like it keeps going up if you double the amount
of computing power that the thing gets twice as smart.
Instead, what happens is if you want, it goes in like orders of magnitude.
So if you want to make it another kind of increment smarter, you got a 10 X,
you got to increase by a factor of 10, the amount of compute and then a factor of 10
against now your factor of a hundred and then, and then 10 again.
So if you look at the amount of compute that's been used to train these systems over time,
it's this like exponential explosive exponential that just keeps going like higher and higher
and higher and steepens and steepens like 10x every,
I think it's about every two years now,
you 10x the amount of compute.
Now you can only do that so many times
until your data center is like a 100 billion,
a trillion dollar, 10 trillion dollar,
like every year you're kind of doing that.
So right now, if you look at the clusters
like the ones that Elon is building like the ones that Elon is building,
the ones that Sam is building, Memphis and Texas,
these facilities are hitting the $100 billion scale.
Like we're kind of in that,
there were tens of billions of dollars actually.
Looking at 2027, you're kind of more in that space, right?
So you can only do 10X so many more times until you run out of money, but more importantly, you run out of more in that space, right? So you can only do 10X so many more times
until you run out of money,
but more importantly, you run out of chips.
Like literally, TSMC cannot pump out those chips
fast enough to keep up with this insane growth.
And one consequence of that is that you essentially
have like this gridlock,
like new supply chain choke points show up,
and you're like, suddenly I don't have enough chips,
or I run out of power.
That's the thing that's happening
on the US energy grid right now.
We're literally like, we're running out of
like one, two gigawatt places
where we can plant a data center.
That's the thing people are fighting over.
It's one of the reasons why energy deregulation
is a really important pillar of US competitiveness.
This is actually something we we found when we were when we're working on this
investigation. One of the things that adversaries do is they actually will fund protest groups
against energy infrastructure projects just to slow down just to like just to tie them up in
litigation just to tie them up in litigation exactly and like it was
actually remarkable we talked to some some some of the some state cabinet
officials so for in various US states and they're basically saying like yep
we're actually tracking the fact that as far as we can tell every single
environmental or whatever protest group against an energy project has funding that
can be traced back to nation state adversaries who are they don't know they don't know about
it so they're not doing it intentionally they're not like oh we're trying to know they just
you just imagine like oh we've got like there's a millionaire backer who cares about the environment
he's giving us a lot of money great fantastic But sitting behind that dude in the shadows is like the
usual suspects.
And it's what you would do, right? I mean, if you're trying to tie up, you're just trying
to fuck with us. Like, just go for it.
You were just advocating fucking with them. So of course they're gonna fuck with us.
That's right. That's it. That's it.
What a weird world we're living in.
Yeah. But you can also see how a lot of this is still us like getting in our own way, right?
We could, if we had the will, we could go like, okay, so for certain types of energy projects, for
data center projects and some carve out categories, we're actually going to put bounds around
how much delay you can create on by by lawfare and by other stuff. And that allows things
to move forward, while still allowing the legitimate concerns
of the population for projects like this in the backyard to have their say. But there's
a national security element that needs needs to be injected into this somewhere. And it's
all part of the rule set that we have and are are like tying an arm behind our back
on basically.
So what would deregulation look like? How would that be mapped out?
There's a lot of low-hanging fruit for that.
What are the big ones?
Yeah, so right now, I mean, there
are all kinds of things around.
It gets in the weeds pretty quickly.
But there are all kinds of things around.
If you're going to, so carbon emissions is a big thing.
So yes, data centers, no question, put out,
like have massive carbon footprints.
That's definitely a thing.
The question is, like, are you really going
to bottleneck builds because of that?
And like, are you gonna come out with exemptions for,
you know, like NEPA exemptions for all these kinds of things?
Do you think a lot of this green energy shit
is being funded by other countries to try to slow down our energy?
Yeah.
It's a dimension that was flagged actually in the context of what Ed was talking about.
That's one of the arguments that's being made.
And to be clear though, this is also how adversaries operate,
is not necessarily in creating something out of nothing,
because that's hard to do and it's like fake, right?
Instead it's like, there's a legitimate concern.
So a lot of the stuff around the environment
and around like totally legitimate concerns,
like I don't want my backyard waters to be polluted.
I don't want my kids to get cancer from whatever,
like totally legitimate concerns.
So what they do, it's like we talked about,
like you're like waving that rowboat back and forth.
They identify the nascent concerns that are genuine and grassroots and they just go like this this and this
Amplify well that would make sense why they amplify carbon
Above all these other things you think about the amount of particulates in the atmosphere
Pollution totally polluting the rivers polluting the ocean that doesn't seem to get a lot of traction carbon does yeah
And when you go carbon zero you put a giant monkey wrench into the gears of society
but one of the tells one of the tells is also like
So, you know nuclear would be kind of the ideal energy
Yeah, especially modern power plants like the right gen 3 or gen 4 stuff
Which have very little meltdown risk safe by default all that stuff
And yet these groups are like coming out against this. It's like perfect clean green power
What's going on guys?
And it's because at not again not a hundred percent of the time
You can't you can't really say that because it's so fuzzy and around a lot of his idealistic people looking for a utopia
I'm gonna get co-opted by nation states.
And not even co-opted.
Just funded. They're fully sincere.
Yeah, just amplified.
Just funded.
Amplified in a preposterous way.
That's it.
And then Al Gore gets at the helm of it.
And then that little girl, that how dare you girl.
Well.
Oh.
How dare you take my child away from you.
How dare you?
Yeah, it's wonderful.
It's a wonderful thing to watch play out
because it just capitalizes on all these human vulnerabilities. Yeah, and one wonderful. It's a wonderful thing to watch play out because it just capitalizes
on all these human vulnerabilities.
Yeah, and one of the big things that you can do too
is like a quick win is just like impose limits
on how much time these things can be allowed
to be tied up in litigation.
So impose time limits on that process.
Just to say like, look, I get it.
Like we're gonna have this conversation,
but this conversation has a clock on it
because we're talking to this one data center company
and what they were saying, we were asking like,
look, what are the timelines
when you think about bringing new power,
like new natural gas plants online?
And they're like, well, those are like five to seven years
out and then you go, okay, well, like how long?
And that's by the way, that's probably way too long to be relevant in the super intelligence context and so
you're like okay well how long if all the regulations were waived if this is
like a national security imperative and whatever authorities you know Defense
Production Act whatever like what was in your favor and they're like oh I mean
it's actually just like a two-year build like that's that's what it is yeah so
you're tripling the build time.
We're getting in our own way, like every which way,
every which way.
And also, like, I mean, also don't want to be too,
we're getting in our own way,
but like we don't want to like frame it as like China's,
like they fuck up, they fuck up a lot, like all the time.
One actually kind of like funny one is around DeepSeek.
So, you know, you know, DeepSe deep see right they they made this like open source model that like everyone like lost their minds about back
In in January R1. Yeah. Yeah R1 and they're legitimately a really really good team
but it's fairly clear that even as of like end of last year and certainly in the summer of last year like
They were not dialed in to last year, like, they were not
dialed in to the CCP mothership and they were doing stuff that was like actually
kind of hilariously messing up the propaganda efforts of the CCP without
realizing it. So to give you like some context on this, one of the
CCP's like large kind of propaganda goals
in the last four years has been framing Korean's narrative that
like, the export controls we have around AI and like all this
gear and stuff that we were talking about. Look, man, those
don't even work. So you might as well just like, give up. Why
don't you just give up on the export controls? We don't even
we don't even care.
So that trying to frame that narrative and they went to like gigantic efforts to do this.
So I don't know if there's this like kind of crazy thing where the Secretary of Commerce
under Biden, Gina Raimondo, visited China in I think August 2023.
And the Chinese basically like timed the launch of the Huawei Mate 60 phone
that had this these chips that were supposed to be made by like export control shit for
right for her visit. So it was basically just like a big like, fuck you, we don't even
give a shit about your export controls, like basically trying to a morale hit or whatever.
And you think about that, right? That's an incredibly
expensive set piece. That's like, you got to coordinate with Huawei, you got to like, get the
the tik tok memes and shit like going going in the right in the right direction, all that stuff. And
all the stuff they've been putting out is around this narrative. Now, fast forward to mid last year,
the CEO of deep sea, the company, back then, it was totally
obscure, like nobody was tracking who they were, they were working in total obscurity.
He goes on this, he does this random interview on Substack.
And what he says is, he's like, yeah, so honestly, like, we're really excited and doing this
AGI push or whatever.
And like, honestly, like, we're really excited and doing this AGI push or whatever. And like, honestly, like money is not the problem for us. Talent is not the problem for us. But like, access to compute like these export controls, man, they do they ever work? That's a real problem for us. Oh, boy. And like nobody noticed at the time. But then but then the whole deep-seek r1 thing blew up in December
and now you imagine like you're the Chinese Ministry of Foreign Affairs like
you've been like you've been putting this narrative together for like four
years and this jackass that nobody heard about five minutes ago basically just
like shits all over and like you're not hearing that line from many more no no
no they've locked that shit down Oh, and actually the funny the funniest part of this there in right when r1 launched. There's a random deep-seek employee
I think his name is like dia guo or something like that. He tweets out
He's like so this is like our most exciting launch of the year. Nothing can stop us on the path to a GI
except access to compute and then literally the dude in Washington, DC,
who works at a think tank on export controls against China,
reposts that on X and goes basically like,
message received.
And so like hilarious for us,
but also like, you know that on the backside,
somebody got
Screamed at for that shit somebody got a magic bust somebody got yeah
Somebody got like taken away or whatever because like it just it just undermined their entire like four-year
Like narrative around these export controls. Wow, but that you're that shit ain't gonna happen again from deep-seek better believe it
It's and that that's part of the problem with like so the Chinese face so many issues
one of them is you know to kind of
Another one is the idea of just waste and fraud right so we have a free market like what that means is you raise from private
capital people who are pretty damn good at assessing shit will like look at your
Your setup and assess whether it's worth, you know backing you for these massive multi-billion dollar deals
In China the state like I mean the stories of waste are pretty insane
They'll like send a billion dollars to like a bunch of yahoo's who will pivot from whatever like I don't know making these widgets
To just like oh now we're like a chip foundry and they have no experience in it
But because of all these subsidies, because of all these opportunities,
now we're going to say that we are.
And then no surprise, two years later, they burn out
and they've just like lit a billion dollars on fire or whatever billion yen.
And like the weird thing is, this is actually working overall,
but it does lead to insane and unsustainable levels of waste.
Like the the Chinese system right now is obviously like they've got their their massive property bubble that they're that's looking
really bad. They've got a population crisis. The only way out for them is the
AI stuff right now. Like they're really the only path for them is that, which is
why they're they're working it so hard. But the the the stories of just like
billions and tens of billions of dollars being lit on fire specifically in the
semiconductor industry, on the in the semiconductor industry, in the AI industry.
That's a drag force that they're dealing with constantly
that we don't have here in the same way.
So it's the different structural advantages and weaknesses of both systems.
And when we think about what do we need to do to counter this,
to be active in this space, to be a live player again,
it means factoring in how like how do you, yeah,
I mean, how do you take advantage
of some of those opportunities
that their system presents that ours doesn't?
When you say be a live player again,
like where do you position us?
It's, I think it remains to be,
so right now this administration's
obviously taking bigger swings.
That-
What are they doing differently?
So, well, I mean mean things like tariffs I mean
they're not shy about trying new stuff and you tariffs are are very complex in
this space like the impact the actual impact of the tariffs and and not
universally good but the on-shoring effect is also something that you really
want so it's a very mixed bag but it's certainly an administration that's like
willing to do high-stakes big moves in a way that other
Administrations haven't and and a time when you're looking at a transformative technology
That's gonna like upend so much about the way the world works. You can't afford to have that mentality
We're just talking about with like the nervous. I mean
You encountered it with the staffers on you know in the when booking the podcast with the presidential
You encountered it with the staffers on you know in the when booking the podcast with the presidential
Cycle right like the kind of like nervous
ANSI staffer who just gotta be controlled and it's gonna be like just so yeah It's like if you like the like, you know wrestlers have that mentality of like just like aggression like like feed in right feed forward
Don't just sit back and like wait to take the punch. It's not like
One of the guys who helped us out on this has the saying he's like
Fuck you. I go first and it's always my turn, right?
That's what success looks like when you actually are managing these kinds of national security issues
The mentality we had adopted was this like sort of siege mentality
Where we're just letting stuff happen to us and we're not feeding in.
That's something that I'm much more optimistic about
in this context.
It's tough too, because I understand people who hear that
and go like, well, look, you're talking about like,
this is an escalatory agenda.
Again, I actually think paradoxically it's not.
It's about keeping adversaries in check
and training them to respect
American territorial integrity integrity American technological sovereignty
Like you don't get that for free. And if you just sit back your that is escalatory
It's just yeah and base
This is basically the the sub-threshold version of like, you know, like the World War two appeasement thing where back
You know Hitler was like was was taken
He was taken taken Austria. He was remilitarizing shit,
he was doing this, he was doing that,
and the British were like,
okay, we're gonna let him just take one more thing,
and then he will be satisfied.
And that just didn't work.
Maybe I have a little bit of Poland, please.
A little bit of Poland.
Maybe the Czechoslovakia is looking awfully mine.
And so this is basically like they fell into that pit like that tar pit back in the day because they're you know,
the peace in our time, right? And to some extent like we've we've still kind of learned the lesson of not letting that happen with territorial boundaries.
But that's big and it's visible and happens on the map and you can't hide it.
Whereas one of the risks, especially with the previous administration was like, there's
these like sub threshold things that don't show up in the news, and that are there that
are calculated like that.
Basically, our adversaries know, because they know history, they know not to give us a Pearl Harbor. They know not to give us a 9-11 because historically
countries that give America a Pearl Harbor end up having a pretty bad time about it. And so why would they give us a reason to come and bind
together against an obvious external like threat or risk when they can just like keep
chipping away at it. This is one of the things like we have to actually elevate
that and realize this is what's happening, this is the strategy, we need to
we need to take that like let's not do appeasement mentality and push it across
in these other domains because that's where the real competition is going on.
That's where it gets so fascinating in regards to social media, because it's imperative that
you have an ability to express yourself.
It's like it's very valuable for everybody.
The free exchange of information, finding out things that you're not going to get from
mainstream media, and it's led to the rise of independent journalism.
It's all great.
But also, you're being manipulated left and right constantly.
And most people don't have the time to filter through it
and try to get some sort of objective sense
of what's actually going on.
It's true. It's like our free speech,
it's like it's the layer where our society figures stuff out.
And if adversaries get into that layer,
they're like almost inside of our brain.
And there's ways of addressing this.
Like one of the challenges obviously is like, so you know, they try, they try to push an extreme opinions
in any direction. And it's, that part is actually it's, it's kind of difficult, because while
the most extreme opinions are like, are also the most likely generally to be wrong, they're
also the most valuable when they're right, because
they tell us a thing that we didn't expect by definition, that's true, and that can really
advance us forward.
And so I mean, the the there are actually solutions to this.
I mean, this this particular thing is isn't an area we were like, too immersed in.
But one of the solutions that has been bandied about
is like, you know, like you might know like polymarket
or prediction markets and stuff like that,
where at least, you know, hypothetically,
if you have a prediction market around like,
if we do this policy, this thing will or won't happen.
That actually creates a challenge around trying
to manipulate that view or that market.
Because what is happening is like if you're an adversary and you want to
not just like manipulate a conversation that's happening in social media, which is cheap,
but manipulate a prediction, the price on a prediction market, you have to buy in,
you have to spend real resources. And if you're to the extent you're wrong, and you're trying to
create a wrong opinion, you're going to lose your resource. So you actually, you actually
can't push too far too many times, or you will just get your money taken away from you.
So I think like that's that's one approach where just in terms of preserving discourse,
some of the stuff that's happening in prediction markets is actually really interesting and really exciting, even in the context of bots
and any eyes and stuff like that.
Hmm.
This is the one way to find truth in the system is find out where people are making money.
Exactly.
Put your money where your mouth is right proof of work like this is that is what just like
the market is theoretically to right.
It's got obviously big big issues
But and can be manipulated in the short term
But in the long run like this is one of the really interesting things about startups, too
Like when you when you run into people in the early days
By definition their startup looks like it's not going to succeed
Right. That is what it means to be a seed stage startup, right?
If it was obvious you were going to succeed you would, you know, the people would have invested. You would have raised more money already.
Yeah, so what you end up having is like
these highly contrarian people who like,
despite everybody telling them that they're gonna fail,
just believe in what they're doing
and think they're gonna succeed.
And that's, I think that's part of what really like
kind of shapes the startup founder's soul
in a way that's really constructive.
It's also something that,
if you look at the Chinese system, is very different. You raise money in very different ways, you're coupled to the
state apparatus, like you're both dependent on it and you're supported by it. But there's just like
a lot of different ways and it makes it hard for Americans to relate to Chinese and vice versa
and understand each other's systems. One of the biggest risks as you're like thinking through
what is your posture going to be relative to these countries
is you fall into thinking that their traditions,
their way of thinking about the world is the same as your own.
And that's something that's been an issue for us with China for a long time
is, you know, hey, they'll liberalize, right?
Like bring them into the World Trade Organization.
It's like, oh, well, actually they'll sign the document,
but they won't actually like live up to any of the commitments.
And it makes appeasement really tempting because you're thinking, oh, they're just like sign the document but they won't they won't actually like live up to any of the commitments and
It's it makes appeasement really tempting because you're thinking. Oh, they're just like us like they're just around the corner They're like if we just like reach out the all branch a little bit further. They're gonna they're gonna come around
It's like a guy who's stuck in the friend zone with a girl
One day she's gonna come around and realize I'm a great catch
One day she's gonna come around and realize I'm a great catch
You keep on trucking buddy one day China's gonna be my bestie we're gonna be besties You just we just need an administration that reaches out to them and just let them know man
There's no reason why she'd be adversaries. We're all just people on planet Earth
Like I I honestly wish that was true
So maybe that's what AI brings about maybe AI
I hope maybe super intelligence realizes. Hey you fucking apes you territorial apes with thermonuclear weapons
How about you shut the fuck up?
You guys are doing the dumbest thing of all time and you're being manipulated by a small group of people that are profiting in
Insane ways off of your
misery. So let's just cut this shit and figure a way to actually equitably
share resources because that's the big thing. You're all stealing from the
earth but some people stole first and those people are now controlling all the
fucking money. How about we stop that? Wow. We just we covered a lot of ground there.
That's what I would do if I was super intelligence.
So that stopped all that.
That actually is like.
So this is not like relevant to the risk stuff or to the whatever at all.
But it's just interesting.
So there's there's actually theories like in the same way
that there's theories around
power seeking and stuff around around super intelligence.
There's theories around like how super intelligences do deals with each other, right? And you actually like
you have this intuition that which is exactly right, which is that, hey, two super intelligences
like actual legit super intelligences should never actually like fight each other destructively
in the real world, right? Like, that seems weird, that shouldn't happen because they're
they're so smart. And in fact, like, there's theories around they can kind of do perfect deals with
each other based on, like, if we're two super intelligences, I can kind of assess, like,
how powerful you are. You can assess how powerful I am. And we can actually, like, we can actually
decide, like, well, if we did fight a war against each other like you would
have this chance of winning I would have that chance of winning and so let's
just instantaneously that there's no benefit in that it also it would know
something that we all know which is the rising tide lifts all boats but the
problem is the people that already have yachts they don't give a fuck about your
boat like hey hey hey that water is In fact, you shouldn't even have water
Well, hopefully it's so positive some right that even they enjoy the benefits
But but I mean you're right
This is the issue right now and one of the like the nice things too is as you as you build up your your rachet of AI
Capabilities it does start to open some opportunities for actual like trust but verify it right which is something that we can't do right now
It's not like with nuclear stockpiles where we've had some success in some context with like enforcing, you know,
treaties and stuff like that, sending inspectors in and all that. With AI right now, like,
how can you actually prove that like some international agreement on the use of AI is
being observed? Even if we figure out how to control these systems, how can we make
sure that, you know that China is baking in
those control mechanisms into their training runs
and that we are and how can we prove it to each other
without having total access to the compute stack?
We don't really have a solution for that.
There are all kinds of programs like this FlexHeg thing,
but anyway, those are not gonna be online by like 2027.
And so one hope is that-
But it's really good that people are working on them
because like for sure you want to you want to like you want to be positioned for catastrophic
success like what if something great happens and like or we have more time or whatever you want to
be working on this stuff that that allows this kind of this kind of control or oversight that
that's kind of hands off where you know in theory you you can give, you can hand over GPUs to an adversary
inside this like box with these encryption things.
The people we've spoken to in the spaces
that actually try to like break into boxes like this
are like, well, probably not gonna work,
but who knows, it might.
Yeah, so the hope is that as you build up
your AI capabilities, basically,
it starts to create solutions. So it starts to create ways for you know, two countries to
Verifiably adhere to some kind of international agreement. Yeah, we're to find like you said like paths for de-escalation
That's the sort of thing that that we actually could get to and that's one of the strong
Positives of where you could end up going that would be what's really fascinating
artificial general intelligence becomes superintell intelligence and it immediately weeds out
all the corruption.
Because, hey, this is the problem.
Like a massive doge in the sky.
Exactly.
We figured it out.
You guys are all criminals and expose it to all the people.
These people that are your leaders have been profiting and they do it on purpose and this
is how they're doing it and this is how they're manipulating you. Yeah, and these are all the lies that they've told I'm sure that list is pretty whoa
It'll be scary if you could x-ray the world right now and like see all the you'd want an MRI
You want to get like down to the tissue?
Yeah, you're right. Yeah, you want to get down to the cellular level but like it
Would be offshore accounts then you start finding
there would be so much like the stuff that comes out you know from just
randomly right just just random shit that comes out like yeah the I forget
that that that like Argentine I think what you were talking about like but the
Argentinian thing like that came out a few years ago around all the oligarchs and the Meryl Streep thing. Yeah, yeah
Yeah, yeah, yeah the the the
Water mat there the laundromat movie there's a you know Panama Papers the Panama Papers. I never saw that no
Oh, it's a good movie. Yeah, is it called the Panama Papers the movie. It's called the laundromat. Yeah. Yeah
Yeah, you remember the Panama Papers, the movie? It's called The Laundromat. Oh, okay. Yeah. You remember the Panama Papers?
Roughly.
Yeah, it's like all the oligarchs stashing their cash in Panama.
Like offshore tax haven stuff.
Yeah.
And someone basically blew it wide open, and so you got to see every oligarch and rich
person's like, you like financial shit, like every once in a while, right?
The world gets just like a flash of like,
oh, here's what's going on to the surface.
And it's like, oh fuck.
And then we all like go back to sleep.
What's fascinating is like the unhideables, right?
The little things that can't help but give away
what is happening.
Like you think about this in AI quite a bit,
you know, some things that are hard for companies to hide
is like they'll have a job posting that they'll put,
they've got to advertise to recruit.
So you'll see like, oh, interesting.
Like, oh, OpenAI is looking to hire some people
from hedge funds.
Yeah.
Hmm.
Like, I wonder what that means.
I wonder what that implies.
Like if you think about all of the leaders
in kind of the AI space,
think about the medallion fund,
for example, right?
This is like super successful hedge fund,
very famous, like what the man who broke the-
The man who broke the market.
The man who broke the market is the famous book
about the founder of the medallion fund.
And like, this is basically like a fund that they make
like ridiculous, like $5 billion returns every year
kind of guaranteed, so much so they have to cap
how much they invest in the market because they would otherwise like move the market too
much like affect it and the fucked up thing about like the way they trade and
this is so this is like 20 year old information but it's still indicative
because like you can't get current information about their strategies but
one of the things that they were the first to kind of go for and figure out is they were like
Okay, are they basically were the first to kind of build what was at the time as much as possible an AI that?
autonomously did trading it at like great speeds and it had like no human oversight and just worked on its own and
What they found was the strategies that were the most successful
Were the ones that were the most successful
Were the ones that humans understood the least?
Because if you have a strategy that a human can understand
Some humans gonna go and figure out that strategy and trade against you
Whereas if you have the kind of the balls to go like oh this thing is doing some weird shit that I cannot understand no matter how hard I try. Let's just fucking YOLO and trust it and like and
make it work. If you have all the stuff debugged and if you have the whole if the whole system
is working right, that's where your biggest successes are. So this kind of strategies
you're talking about? Oh, I mean, like, so like, I, I don't I don't know specific analogy.
Maybe this will this all like, so, so how are AI systems trained today?
Right?
So just as a trading strategy, sorry.
So basically like you bought, as an example, you buy the stock the Thursday after the full
moon and then sell it like the Friday after the new moon or some like random shit like that.
That it's like, why does that even work?
Like, why would that even work?
So to like, to sort of explain why these strategies
work better, if you think about how AI systems
are trained today, you basically, very roughly,
you start with this blob of numbers that's called a model and you
feed it input you get an output if the output you get is no good if you don't
like the output you basically fuck around with all those numbers change
them a little bit and then you try again you're like oh okay that's better and you
repeat that process over and over and over with different inputs and outputs
and eventually those numbers that mysterious ball of numbers starts to
behave well it starts to behave well
It starts to make good predictions or generate good outputs now. You don't know why that is
You just know that it does a good job at least where you've tested it now if you slightly change what you test it on
Suddenly you could discover. Oh shit
It's catastrophically failing at that thing these things are very brittle in that way
And that's part of the reason why chat GPT will just like
completely go on a psycho binge fest every once in a while.
If you give it a prompt that has like too many exclamation
points and asterisks in it or something.
Like these systems are weirdly brittle in that way.
But applied to investment strategies,
if all you're doing is saying like optimize for like,
optimize for returns, give it inputs, give it out, be more money by the end of
the day. It's like an easy goal, like it's a very like clear cut
goal, right? You can give a machine.
So you end up with a machine that gives you these very like it
is a very weird strategy. This ball of numbers isn't human
understandable. It's just really fucking good at making money.
And why is it really fucking good at making money? I mean, it. I mean, it just kind of does the thing and I'm making money
I don't ask too many questions. That's kind of like the so when you try to impose on that system human interpretability
You pay what in the AI world is known as the interpretability tax
basically, you're adding another constraint and
The minute you start to do that you're forcing it to optimize for something other than pure rewards.
Like doctors using AI to diagnose diseases are less effective than the chatbot on its own.
That's actually related, right?
That's related.
If you want that system to get good at diagnosis, that's one thing.
OK, just fucking make it good at diagnosis.
If you want it to be good at diagnosis and to produce explanations that a good doctor...
Convince a doctor.
Yeah, we'll go like, OK, I'll use that.
Well, great. But guess what? to produce explanations that a good doctor, yeah, will go like, okay, I'll use that.
Well, great, but guess what?
Now you're spending some of that precious compute
on something other than just the thing
you're trying to optimize for.
And so now that's gonna come at a cost
of the actual performance of the system.
And so if you are gonna optimize like the fuck
out of making money, you're gonna necessarily
de-optimize the fuck out of anything else,
including being able to even understand
what that system is doing.
And that's kind of like at the heart of a lot
of the kind of big picture AI strategy stuff
is people are wondering like,
how much interpretability tax am I willing to pay here?
And how much does it cost?
And everyone's willing to go a little bit further
and a little further.
And so OpenAI actually had a paper where they,
I guess a blog post where they talked about this and they were
like look
right now we have this this
essentially this like thought stream that our model produces on the way to generating its final output and
that thought stream like
We don't want to touch it to make it like
thought stream, like, we don't want to touch it to make it like interpretable to make it make sense. Because if we do that, then essentially, it'll be optimized to convince us of whatever
the thing is that we want it to do to behave well.
So it's like you've if you've used like one of like an open AI model recently, right,
like oh, three or whatever, it's doing its thinking before it starts like outputting
the answer. And so that thinking is, yeah, we're supposed to like be able to read that and kind of get
it.
But also, we don't want to make it too legible.
Because if we make it too legible, it's going to be optimized to be legible and to be convincing
rather than
to fool us basically.
I mean, yeah, exactly.
Oh, Jesus Christ.
But that's so that's the investment making me less comfortable than I fool us basically. I mean, yeah exactly
But that's so that's the investment making me less comfortable than I thought you would
How bad are they gonna freak us out
More well, I mean, okay, so I do want to highlight so the game plan right now on the positive end Let's see how this works Jesus
Jamie do you feel the same way?
I have articles I didn't bring up that are supporting some of this stuff like today
China quietly made some chip that they shouldn't been able to do because of the sanctions
Oh, and it's basically based off of just sheer will okay, so there's SMIC there's good news on that one at least
this is a kind of a bullshit strategy that they're using so there's okay so
when you make these insane like five minutes read that for people just
listening trying to quietly cracks five an anameter yeah that's it without EUV
what is EUV extreme ultraviolet ultraviolet how SMIC defied the chip sanctions with sheer engineering
Yeah, so this is like um an espionage
so there's
But actually though, so there's a good reason that a lot of these articles are
Making it seem like this is a huge breakthrough. It actually isn't as big as it seems
making it seem like this is a huge breakthrough. It actually isn't as big as it seems.
So, okay, if you want to make really, really, really, really exquisite chips.
Look at this quote.
Moore's law didn't die, Huo wrote.
It moved to Shanghai.
Instead of giving up, China's grinding its way forward layer by layer, pixel by pixel.
The future of chips may no longer be written by who holds the best tools, but by who refuses
to stop building.
The rules are changing and D.U.V. just lit the fuse. Boy. Yeah. So I mean, who wrote that article?
You can, you can, you can. Gizmo China. There it is. Yeah. You can view that as like Chinese
propaganda in a way, actually. So what's actually going on here is, so the Chinese only have these
deep ultraviolet lithography machines
that's like a lot of syllables, but it's just a glorified chip like it's a giant
laser that that zaps your chips to like make the chips when when you're
having them so we're talking about like you do these atomic layer patterns on
the chips and shit and like what this UV thing does is it like fires like a
really high-power laser beam laser beam
Yeah, they attach to the head of sharks that just shoot at the chips. Sorry. That was like an Austin Powers
anyway, they felt like shoot it at the chips and
That causes depending on how the thing is is designed
They'll like have a liquid layer of the stuff that's gonna go on the chip
The UV is really really tight and causes
it exactly causes it to harden. And then they wash off the liquid and they do it all over
again.
Like basically, this is just imprinting a pattern on a chip. So whatever is a tiny printer?
Yeah, so that's it. And so the the exquisite machines that we get to use or that they get
to use in Taiwan are called extreme ultraviolet lithography machines. These are those crazy
lasers.
The ones that China can use, because we've prevented them from getting any of those
extreme ultraviolet lithography machines.
The ones China uses are previous generation machines
called deep ultraviolet.
And they can't actually make chips
as high a resolution as ours.
So what they do is, and what this article is about is,
they basically take the same chip,
they zap it once with DUV, and then they got to pass it through again, zap it again to get closer
to the level of resolution we get in one pass with our exquisite machine.
Now the problem with that is you got to pass the same chip through multiple times, which
slows down your whole process.
It means your yields at the end of the day are lower.
It adds errors.
Yeah.
Yeah. Which makes it more costly. We've known that this is a thing it's called multi-patterning.
It's been a thing for a long time.
There's nothing new under the sun here.
China has been doing this for a while.
But so it's not actually a huge shock that this is happening.
The question is always when you look at an announcement like this,
yields, yields, yields.
How like what percentage of the chips coming out are actually usable and how fast are they coming
out?
That determines is it actually competitive.
And that article too, this ties into the propaganda stuff we were talking about, right?
If you read an article like that, you could be forgiven for going like, oh man, our expert
controls just aren't working, so we might as well just give them up.
When in reality, because you look at the source.
And this is how you know that also this is one
of their propaganda things is you look at Chinese news
sources, what are they saying?
What are the beats that are common?
And you know, just because of the way their media is set up,
totally different from us.
And we're not used to analyzing things this way.
But when you read something in the South China Morning
Post or the Global Times
or Xinhua or in a few different places like this
and it's the same beats coming back,
you know that someone was handed a brief
and it's like, you gotta hit this point,
this point, this point, and yep,
they're gonna find a way to work that
into the news cycle over there.
Jeez.
And it's also like slightly true.
Like yeah, they did manage to make chips at like five nanometers.
Cool.
It's not a lie. It's just it's the same like propaganda technique, right?
You're not most of the time, you're not going to confabulate something out of
nothing. Rather, like you start with the truth and then you push it just a little
bit, just a little bit.
And you keep pushing, pushing, pushing.
Wow. How much is this administration aware of all the things that you're talking about?
So they're actually, they've got some right now they're in the middle of like staffing up some
of the key positions because it's a new administration still and this is such a technical domain.
They've got people there who are like, like at the kind of work level, they have some people.
They have some people now. Yeah, in places like especially in some of the
export control offices now who are some of the best in the business yeah and
and that's that's really important like this is a it's a weird space because so
when you want to actually recruit for for you know government roles in the
space it's really fucking hard because you're competing against like an open AI like
Very like low-range salaries like half a million dollars a year
The government pay scale needless to say is like not where I mean Elon worked for free
He can he can afford to but but still taking a lot of time out of his his day
there's a lot of people like that who are like
You know they they can't justify the cost like, you know, they, they can't justify
the cost. Like they can't afford, they literally can't afford to work for the government. Why
would they? Exactly. So whereas China's like, you don't have a choice, bitch. Yeah. Yeah. And
that's what they say. That's the Chinese word for bitch is really biting. Like if you, if you
translated that, it would be a real, I sure. It's kind of crazy because it seems almost
impossible to compete with that.
I mean, that's like the perfect setup.
If you wanted to control everything
and you wanted to optimize everything for the state,
that's the way you would do it.
Yeah, but it's also easier to make errors
and be wrong-footed in that way.
And also, basically, that system only
works if the dictator at the top is just very competent.
Because the risk always with a dictatorship is like, oh, the dictator turns over and now
it's just a total dumbass.
And now you're the whole thing.
Ben, he surrounds himself.
I mean, look, we just talked about information echo chambers online and stuff.
The ultimate information echo chamber is the one around Xi Jinping right now.
Because no one wants to give him bad news.
Yeah, I'm not gonna.
I don't, you know, like, and so you have this. And this is what you keep seeing, right, is like,
with these, like, like provincial level debt in China, right, which is so awful,
it's like people trying to hide money under imaginary money, money under imaginary mattresses,
and then hiding those mattresses under bigger mattresses until eventually like no one knows
Where the liability is and that and then you get a massive property bubble and any number of other bubbles that are due to
Pop anytime right so longer it goes on like the the the more like stuff gets squirreled away like there's there's actually like a
Story from the Soviet Union. That's that always like gets me which is so
Stalin obviously like
purged and killed like millions of people in the 1930s right so by the 1980s the ruling Politburo
of the Soviet Union obviously like things have been different generations had turned over and
all the stuff but those people the most powerful people in the USSR, could not figure out what had happened
to their own families during the purchase.
The information was just nowhere to be found
because the machine of the state was just so aligned
around, we just gotta kill as many fucking people as we can
and turn it over and then hide the evidence of it and then kill the people who killed
the people and then kill those people who killed those people like it also
wasn't just kill the people right it was like a lot of like kind of Gulag
Archipelago style it's it's about labor right because the fundamentals of the
economy are so shit that you basically have to find a way to justify putting
people in labor camps and like that's right but it was very much like you grind mostly or largely,
you grind them to death and basically they've gone away
and you burn the records of it happening.
So literally the whole town, right?
That disappeared, like people who are like,
there's no record or there's like,
or usually the way you know about it is there's like one dude.
And it's like, this one dude
has a very precarious escape story.
And it's like, if literally this dude didn't get away,
you wouldn't know about the entire town that was like wiped out yeah it's
crazy Jesus Christ yeah the stuff that like apart from that though communism
works communism great it just hasn't been done right that's right I feel like
we could do it right and we have a ten page plan that we have we came real
close came real close so close yeah yeah I mean that's what the blue no matter
who people don't really totally understand.
Like we're not even talking about political parties.
We're talking about power structures.
We came close to a terrifying power structure and it was willing to just do whatever it
could to keep it rolling and it was rolling for four years.
It was rolling for four years without anyone at the helm.
Show me the incentives, right?
I mean, that's always the question.
Like, yeah, one of the things is to like when you have such a big structure
that's overseeing such complexity.
Right. Obviously, a lot of stuff can hide in that structure.
And it's it's actually it's it's not unrelated to the whole AI picture.
Like you need there's only so much compute that you have at the top
of that system that you can spend, right?
As the president, as a cabinet member, like whatever.
You can't look over everyone's shoulder and do their homework.
You can't do founder mode all the way down in all the branches and all the like action
officers and all that shit.
That's not going to happen, which means you're spending five seconds thinking about how to
unfuck some part of the government, but then like, you know corrupt people who run their own fiefdoms there spend every day
It's like their whole life survived, you know, like justify themselves. Yeah. Yeah, that's the USA dilemma. Yeah
Yeah, is there uncovering? Oh, it's just insane amount of NGOs. Like where's this going?
We talked about this the other day, but India has an NGO for every 600 people
Wait, what yes need more NGOs. There's
3.3 million NGOs
In India do they do they like bucket like what what are the categories that they fall into like who fucking knows?
That's part of the problem that one of the things that Elon had found is that there's money that just goes out with no receipts and it's billions of dollars.
We need to take that further.
We need an NGO for every person in India.
We will get that eventually.
It's the exponential trend.
It's just like AI, the number of NGOs is doubling every year.
We're making incredible progress in bullshit.
The NGO scaling law, the bullshit scaling law. Well it's just
unfortunately it's Republicans doing it right so it's unfortunately the
Democrats are gonna oppose it even if it's showing that there's like insane
waste of your tax dollars. I thought some of the Doge stuff was pretty
bipartisan's like what like there's congressional support at least on both
sides no? Well sort of you know I think the real issue is in dismantling
a lot of these programs that you can point to some good
some of these programs do.
The problem is some of them are so overwhelmed
with fraud and waste that it's like to keep them active
in the state they are.
What do you do?
Do you rip the Band-Aid off and start from scratch?
What do you do with the Department of Education?
Do you say, why are we number 39 when we were number one? Like what did you guys do with all that money?
Yeah, so there's a problem. There's this idea in software engineering actually
You're talking to one of our employees about this which is like
Refactoring right so when you're writing like a bunch of software it gets really really big and hairy and complicated and there's all kinds of like
Dumbass shit and there's all kinds of waste that happens in that in that code
base. There's this thing that you do every, you know, every like few months, is you just
think called refactoring, which is like you go like, Okay, we have, you know, 10 different
things that are trying to do the same thing. Let's get rid of nine of those things, and
just like rewrite it as the one thing. So's get rid of nine of those things and just like
rewrite it as the one thing. So there's like a cleanup and refresh cycle that has
to happen. Whenever you're developing a big complex thing that does a lot of
stuff, the thing is like the the US government at every level has basically
never done a refactoring of itself. And so the the way that problems get solved is
you're like, well, we need to do this new thing. So we're just gonna like stick on
another appendage to the beast and and get that appendage to do that new thing.
And like that's been going on for 250 years. So we end up with like this beast
that has a lot of appendages, many of which do incredibly duplicate of and wasteful stuff that if you were a software
engineer just like not politically just objectively looking at that as a system
you'd go like oh this is a catastrophe and like we have processes that the
industry we understand how what needs to be done to fix that you have to refactor.
But they haven't done that, hence the $36 trillion of debt.
It's a problem too, though, in all, like, when you're a big enough organization, you
run into this problem, like Google has this problem, famously, Facebook, like we have
friends like, like Jason, so Jason's the the guy you spoke to about that, like, so so he's,
he's like a startup engineer.
So he works in like relatively small code bases
and he like, you know, can hold the whole code base
in his head at a time.
But when you move over to, you know, Google, to Facebook,
like all of a sudden this gargantuan code base
starts to look more like the complexity of the US government,
just like very, you know, very roughly in terms of scale.
So now you're like, okay, well, we want to add functionality. And so we want
to incentivize our teams to build products that are going to be valuable. And the challenge
is the best way to incentivize that is to give people incentives to build new functionality,
not to refactor. There's no glory. If you work at Google, there's no glory in refactoring.
If you work at Meta, there's no glory in refactoring. Like, Friends of Ours is what they're- Like there's no promotion, right. There's no glory. If you work at Google, there's no glory in refactoring. If you work at Meta, there's no glory in refactoring.
Like, a friend of ours-
Like there's no promotion, right?
There's no-
Exactly.
You have to be a product owner.
So you have to like invent the next Gmail.
You got to invent the next Google Calendar.
You got to do the next messenger app.
That's how you get promoted.
And so you've got like this attitude.
You go into there and you're just like,
let me crank this stuff out and like,
try to ignore all the shit in the code base.
No glory in there. And what you're left with is this like a this Frankenstein monster of a code base that you just keep stapling more shit on to.
And then B, this massive graveyard of apps that never get used. This is like the thing Google is famous for.
If you ever see like the Google graveyard of apps, it's like all these things oh, yeah I guess I kind of remember Google me somebody made their career off of launching that shit and then peaced out and it died
That's that's like the incentive structure at Google unfortunately
And it's it's also kind of the only way to do I mean or maybe it's probably not but
In the world where humans are doing the oversight. That's your limitation, right?
You got some people at the top who have a limited bandwidth and compute that they can dedicate to like hunting down the problems
AI agents might actually solve that right you could like actually have the you know a sort of
Autonomous AI agent that is the autonomous CEO or something go into an organization
Uproot all the things and do that refactor you could get way more efficient organizations out of that
I mean that like thinking about like government corruption and waste and fraud.
That's the kind of thing where those sorts of tools could be radically empowering, but
you got it.
You know, you got to get them to work right and for you.
We've given us a lot to think about.
Is there anything more?
Should we wrap this up?
If we've made you sufficiently uncomfortable? I'm super uncomfortable
Was the very uneasy was the butt tap too much at the beginning or we know it's fine. No, that's fine. All of it was weird
It's just you know, I always try to look at some
Non-cynical way out of this
Well, the thing is like there are paths out we talked about this and the fact that a lot of these problems are just us tripping on our own
feet. So if we can just like, unfuck ourselves a little bit,
we are actually we can unleash a lot of this stuff. And as long
as we understand also, the bar that security has to hit, and
how important that is, like, we actually can put all this stuff
together, we have the capacity. It all exists.
It just needs to actually get aligned and around an initiative,
and we have to be able to reach out and touch.
On the control side, there's also a world where,
and this is actually, like, if you talk to the labs,
this is what they're actually planning to do,
but it's a question of how methodically and carefully they can do this.
The plan is to ratchet up capabilities, and then scale, in other words.
And then as you do that, you start to use your AI systems,
your increasingly clever and powerful AI systems,
to do research on technical control.
So you basically build the next generation of systems,
you try to get that generation of systems to help you
just inch forward a little bit more on the capability side.
It's a very precarious balance,
but it's something that like at least isn't insane on the face of it, and fortunately, I
mean, is the default path, like the labs are talking about that kind of
control element as being a key pillar of their strategy. But these conversations are not
happening in China. So what do you think they're doing to keep AI from uprooting
their system? So that's interesting. There's... Because I would imagine they don't want to lose control.
Right.
There's a lot of ambiguity and uncertainty about what's going on in China.
So there's been a lot of like track 1.5, track 2 diplomacy, basically where you have, you
know, non-government guys from one side talk to government guys from the other side or
talk to non-government from the other side and kind of start to align on like, okay,
what do we think the issues are?
You know, the Chinese are, there there a lot of like freaked out Chinese researchers
and have come out publicly and said, hey, like, we're really concerned
about this whole loss of control thing.
There are public statements and all that.
You also have to be mindful that any statement the CCP puts out
is a statement they want you to see. Right.
So when they say like, oh, yeah, we're really worried about this thing.
It's genuinely hard to assess what that even means.
But they're like, as you as you start to build these systems
We expect you're gonna see some evidence of this shit before and it's not necessarily
It's not like you're gonna build the system necessarily and have it take over the world like what we see with agents
Yeah, so I was actually gonna add it. There's really really good point and and something where like
Yeah, so I was actually gonna add to that. There's a really, really good point and something where
open source AI is like,
even could potentially have an effect here.
So a couple of the major labs,
like Open AI Anthropic, I think,
came out recently and said,
look, we're on the cusp,
our systems are on the cusp
of being able to help a total novice,
like somewhat no experience,
develop and deploy and release a known biological threat. And that's like, that's something we're
going to have to grapple with over the next few months. And eventually, like, capabilities like
this, not necessarily just biological, but also cyber and other areas, are going to come out in
open source. And when they come out in open source,
we might.
Basically for anybody to download.
For anybody to download and use.
When they come out in open source,
you might actually start to see some things happen,
like some incidents, like some major hacks
that were just done by a random motherfucker
who just wants to see the world burn,
but that wakes us up to like, oh shit, these things actually are powerful. I think one of the aspects also here is we're
still in that post Cold War honeymoon, many of us right in that mentality, like not everyone
is like wrap their heads around this stuff. And the like, what needs to happen is something that makes us go like, oh damn, we act like we weren't even really trying this entire time.
Because this is like this is the the 911 effect. This is the Pearl Harbor effect.
Once you have a thing that aligns everyone around like, oh shit, this is real. We actually need to do it.
And we're freaked out, we're actually safer.
We're safer when we're all like, okay, something important needs to happen.
Right.
Instead of letting them just slowly chip away.
Exactly.
And so we like, we need to have some sort of shock, and we probably will get some kind
of shock like over the next few months, the way things are trending.
And when that happens, then, but I mean, like it's
Or years if that makes you feel better. Or it's or yours if that makes you feel better. But because you have the potential for this open source like it's
probably gonna be like a survivable shock right but but still a shock and
so let us actually realign around like okay let's actually fucking solve some
problems for real and so putting together the groundwork right is what
we're doing
around like, let's, let's pre think a lot of this stuff so that, like if and when the
shock comes, we have a break glass plan, we have a we have a plan.
And the loss of control stuff is similar. Like, so one interesting thing that happens
with AI agents today is they'll like, they'll get any. So an AI agent will take a complex
task that you give it, like
find me, I don't know, the best sneakers for me online, some shit like that, and they'll
break it down into a series of sub-steps.
And then each of those steps, it'll farm out to a version of itself, say, to execute autonomously.
The more complex a task is, the more of those little sub-steps there are in it.
And so you can have an AI agent that nails like 99% of those steps. But if it screws up just one,
the whole thing is a flop, right. And so if you think about
like the sort of like loss of control scenarios that a lot of
people look at are autonomous replication, like the model gets
access to the internet copies itself onto servers and all that
stuff. Those are very complex movements. If it screws
up at any point along the way, that's a tell, like, oh shit, something's happening there.
And you can start to think about like, okay, well, what went wrong? We get another do,
we get another try, and we can kind of learn from our mistakes. So there is this sort of
like this picture, you know, one camp goes, oh, well, we're going to kind of make the
super intelligence in a vat, and then it explodes out and we lose control over it.
That doesn't necessarily seem like the default scenario right now.
It seems like what we're doing is scaling these systems.
We might unhobble them with big capability jumps, but it's also there's a component
of this that is a continuous process that lets us kind of get our arms around it in
a more staged way. That's another thing that I think is in our favor that we didn't expect before
as a field basically. And I think that's a good thing. That helps you kind of detect
these breakout attempts and do things about them.
All right. I'm going to bring this home. I'm freaked out. So thank you. Thanks for
trying to make me feel better. I don't think you did. But
I really appreciate you guys and appreciate your perspective because it's very important.
And it's very illuminating. You know, it really gives you a sense of what's going on. And
I think one of the things that you said that's really important is like, it sucks that we
need a 9-11 moment or a Pearl Harbor moment to realize what's happening. So we all come
together, but hopefully slowly,
but surely through conversations like this,
people realize what's actually happening.
You need one of those moments like every generation.
Like that's how you get contact with the truth.
And it's like, it's painful,
but like the lights on the other side.
Thank you.
Thank you very much.
Thank you.
Bye everybody.