Lex Fridman Podcast - #407 – Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI
Episode Date: December 29, 2023

Guillaume Verdon (aka Beff Jezos on Twitter) is a physicist, quantum computing researcher, and founder of the e/acc (effective accelerationism) movement. Please support this podcast by checking out our sponsors: - LMNT: https://drinkLMNT.com/lex to get free sample pack - Notion: https://notion.com/lex - InsideTracker: https://insidetracker.com/lex to get 20% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil EPISODE LINKS: Guillaume Verdon Twitter: https://twitter.com/GillVerd Beff Jezos Twitter: https://twitter.com/BasedBeffJezos Extropic: https://extropic.ai/ E/acc Blog: https://effectiveaccelerationism.substack.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:18) - Beff Jezos (19:16) - Thermodynamics (25:31) - Doxxing (35:25) - Anonymous bots (42:53) - Power (45:24) - AI dangers (48:56) - Building AGI (57:09) - Merging with AI (1:04:51) - p(doom) (1:20:18) - Quantum machine learning (1:33:36) - Quantum computer (1:42:10) - Aliens (1:46:59) - Quantum gravity (1:52:20) - Kardashev scale (1:54:12) - Effective accelerationism (e/acc) (2:04:42) - Humor and memes (2:07:48) - Jeff Bezos (2:14:20) - Elon Musk (2:20:50) - Extropic (2:29:26) - Singularity and AGI (2:33:24) - AI doomers (2:34:49) - Effective altruism (2:41:18) - Day in the life (2:47:45) - Identity (2:50:35) - Advice for young people (2:52:37) - Mortality (2:56:20) - Meaning of life
Transcript
The following is a conversation with Guillaume Verdon, the man behind the previously anonymous account Based Beff Jezos on X. These two identities were merged by a doxxing article in Forbes titled Who Is Based Beff Jezos, the Leader of the Tech Elite's E/acc Movement.
So let me describe these two identities that coexist in the mind of one human.
Identity number one, Guillaume, is a physicist, applied mathematician, and quantum machine learning researcher and engineer, receiving his PhD in quantum machine learning, working at Google on quantum computing, and finally launching his own company called Extropic that seeks to build physics-based computing hardware for generative AI.

Identity number two, Beff Jezos on X, is the creator of the effective accelerationism movement, often abbreviated as e/acc, which advocates for propelling rapid technological progress as the ethically optimal course of action for humanity. For example, its proponents believe that progress in AI is a great social equalizer, which should be pushed forward.
e/acc followers see themselves as a counterweight to the cautious view that AI is highly unpredictable,
potentially dangerous, and needs to be regulated.
They often give their opponents the labels of, quote, doomers or decels, short for decelerationists. As Beff himself put it, e/acc is a memetic optimism virus. The style of communication of this movement leans always toward the memes and the lols. But there is an intellectual foundation that we explore in this conversation. Now, speaking of memes, I am a kind of aspiring connoisseur of the absurd. It is not an accident that I spoke to Jeff Bezos and Beff Jezos back to back. As we talk about, Beff admires Jeff as one of the most important humans alive, and I admire the beautiful absurdity and the humor of it all.
And now, a quick few second mention of each sponsor.
Check them out in the description.
It's the best way to support this podcast.
We got LMNT for hydration.
The thing I'm drinking right now.
Notion for team collaboration.
InsideTracker for biological data that leads to your well-being.
And AG1 for my daily nutritional health.
Choose wisely, my friends.
Also, if you want to work with our amazing team,
we're always hiring, go to lexfridman.com/hiring. Or if you want to just get in touch with me for whatever reason, go to lexfridman.com/contact. And now onto the full ad reads. As always, no ads in the middle. I try to make these interesting, but if you must skip them, friends, please still check out our sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by LMNT, an electrolyte drink mix. It's got sodium, potassium, magnesium. I drink it so much, so many times a day. It's
really the foundation of my one meal a day lifestyle. I eat almost always one meal a day
in the evening. So I fast, and I really enjoy that.
Everything it does for me, I recommend everybody at least try it. Intermittent fasting taken to the daily extreme of fasting for 23, 24 hours, whatever it is. And for that, you have to get all the electrolytes right. You have to drink water, but not just drink water; you have to drink water coupled with sodium. And sometimes getting the magnesium part and the potassium part right is tricky, but it's really important so that you feel good. And that's what LMNT does, and it makes it delicious. My favorite flavor is watermelon salt. Get a sample pack for free with any purchase. Try it at drinkLMNT.com/lex. This show is also brought to you by Notion, a note-taking and team collaboration tool. I've used it for a long, long time for note-taking, but it's also very useful for all kinds of collaborative note-taking in a team environment. And they integrate the whole AI thing, the LLM thing, well.
So you can use it to summarize whatever you've written.
You can expand it, you can change the language style, and how it's written.
Just all the things that large language models should be able to do
are integrated really, really, really well.
I think of human AI collaboration, not just as a boost for productivity at this time,
but as a kind of learning process that it takes time to really understand what AI is good
at and not.
And that is going to evolve continuously as AI gets better and better and better.
It's almost like watching a child grow up or something like this. You're fine-tuning what it means to be a good parent as the child grows up. In the same way, you're fine-tuning what it means to be a good, effective human as the AI grows up. And so you should use a tool that's part of your daily life to interact with AI while being productive, but also learning what it is good at, what are the ways I can integrate it into my life to make me more productive, not just in terms of shortening the time it takes to do a task, but being the fuel, the creative fuel, for the genius that is you.
So, Notion AI can now give you instant answers
to your questions using information from across your
wiki, projects, docs, and meeting notes. Try Notion AI for free
when you go to Notion.com slash Lex.
That's all lowercase Notion.com slash Lex,
to try the power of Notion AI today.
This show is also brought to you by InsideTracker, a service I use to make sense of the biological data that comes from my body: blood data, DNA data, fitness tracker data, all of that to make lifestyle recommendations for me, diet stuff too. There's all this beautiful data, which you can give to super-intelligent computational systems to process and to give us, in a human-interpretable way, recommendations on how to improve our life. And I don't just mean optimize life, because I think a perfect life is not the life you want. What you want is a complicated rollercoaster of a life, but one that is optimized in certain aspects of health, well-being, you know, energy, but not just optimal in this cold, clinical sense.
Anyway, that's a longer conversation.
Probably one I'll touch on.
Maybe when I review Brave New World
or in other conversations I have in the podcast,
anyway, get special savings for a limited time
when you go to insidetracker.com slash Lex.
This show is also brought to you by AG1, the thing I drink twice a day, and it brings me much joy. It's green, it's delicious, it's got a lot of vitamins and minerals. It's basically just an incredible super-powered multivitamin. I enjoy it, a lot of my friends enjoy it. It's a thing that makes me feel at home when I'm traveling, when I get one of the travel packs. The things I consume daily are pretty simple. We're talking about the electrolytes with LMNT, AG1 for the vitamins and minerals, then fish oil, and then just a good healthy diet. Low carb, either ultra low carb, so just meat, or meat and veggies.
But I'm not very strict about that kind of stuff.
Just know that I feel good while I'm on low carb. And so all of that, combined with fasting and rigorous, sometimes crazy routines of work, some mental struggle and physical work, you know, running and all that kind of stuff, jiu-jitsu training, sprints, working out, lifting heavy, all that kind of stuff. You have to make sure you have the basic nutrition stuff right. And that's what AG1 does for me. Maybe it will do that for you. They'll give you a one-month supply of fish oil when you sign up at drinkag1.com slash Lex.
This is the Lex Fridman Podcast.
To support it, please check out our sponsors
in the description.
And now, dear friends, here's Guillaume Verdon. Let's get the facts of identity down first.
Your name is Guillaume Verdon, Gill, but you're also behind the anonymous account on X called Based Beff Jezos. So first, Guillaume Verdon: you're a quantum computing guy, physicist, applied mathematician, and then Based Beff Jezos is basically a meme account that started a movement with a philosophy behind it. So maybe can you just linger on who these people are, in terms of characters, in terms of communication styles, in terms of philosophies? I mean, with my main identity, I guess,
ever since I was a kid, I wanted to figure out a theory of everything to understand the universe.
And that path led me to theoretical physics eventually,
right, trying to answer the big questions of,
why are we here? Where are we going, right?
And that led me to study information theory
and try to understand physics from the lens of information theory.
Understand the universe as one big computation.
And essentially after reaching a certain level,
studying black hole physics,
I realized that I wanted to not only understand how the universe computes
but sort of compute like nature,
and figure out how to build and apply computers
that are inspired by nature.
So, you know, physics-based computers.
And that sort of brought me to quantum computing
as a field of study to, first of all, simulate nature.
And in my work, it was to learn representations of nature that can run on such computers.
So if you have AI representations that think like nature,
then they'll be able to more accurately represent it.
At least that was the thesis that brought me
to be an early player in the field called Quantum Machine Learning,
so how to do machine learning on quantum computers.
And really sort of extend notions of intelligence
to the quantum realm.
So how do you capture and understand quantum
mechanical data from our world? And how do you learn quantum mechanical representations
of our world? On what kind of computer do you run these representations and train them?
How do you do so? And so that's really sort of the questions I was looking to answer, because ultimately I had a sort of crisis of faith.
Originally I wanted to figure out, you know, as every physicist does at the beginning of
their career, a few equations that describe the whole universe, right, and sort of be the
hero of the story there.
But eventually I realized that actually
augmenting ourselves with machines, augmenting our ability to perceive, predict, and control our world with machines, is the path forward.
And that's what got me to leave theoretical physics and go into quantum computing and quantum machine learning. And during those years, I thought that there was still a piece missing. There was a piece of our understanding of the world, of our way to compute and our way to think about the world. And if you look at the physical scales, right, at the very small scales, things are quantum mechanical. Right? And at the very large scales, things are deterministic.
Things have averaged out. Right?
I'm definitely here in this seat.
I'm not in a superposition over here and there.
At the very small scales, things are in superposition.
They can exhibit interference effects.
But at the mesoscales, right, the scales that matter for
day-to-day life, you know, the scales of proteins, of biology, of gases, liquids, and so
on, things are actually thermodynamical, right, they're fluctuating. And after I guess about eight years in quantum computing and quantum machine learning,
I had a realization that, you know, I was looking for answers about our universe by studying the
very big and the very small, right? I did a bit of quantum cosmology. So that's studying
the cosmos, where it's going, where it came from.
You study black hole physics.
You study the extremes in quantum gravity.
You study where the energy density is sufficient
for both quantum mechanics and gravity to be relevant.
Right?
And the sort of extreme scenarios are black holes and the very early universe. So those are the sorts of scenarios where you study the interface between quantum mechanics and relativity.
And really I was studying these extremes to understand
how the universe works and where is it going,
but I was missing a lot of the meat in the middle, if you will, right? Because day to day, it's neither quantum mechanics nor the cosmos; the physics that is most relevant is thermodynamics, right? Out-of-equilibrium thermodynamics.
Because life is, you know, a process that is thermodynamical and it's out of equilibrium.
We're not, you know, just a soup of particles at equilibrium with nature.
We're a sort of coherent state trying to maintain itself by acquiring
free energy and consuming it.
And that's sort of, I guess, when another shift in my faith in the universe happened, towards the end of my time at Alphabet. And I knew I wanted to build, well, first of all, a computing paradigm based on this
type of physics.
But ultimately, just by trying to experiment with these ideas applied to society and economies and much of what we see around us,
you know, I started an anonymous account
just to relieve the pressure, right?
That comes from having an account
that you're accountable for everything you say on.
And I started an anonymous account
just to experiment with ideas, originally, right?
Because I didn't realize how much I was restricting my space of thoughts until I sort of had the opportunity to let go. In a sense, restricting your speech back-propagates to restricting your thoughts, right?
And by creating an anonymous account,
it seemed like I had unclamped some variables in my brain
and suddenly could explore a much wider parameter space
of thoughts.
Just to linger on that, isn't that interesting?
That one of the things that people often talk about
is that when there's pressure and constraints on speech, it somehow leads to
constraints on thought. Even though it doesn't have to, we can think the thoughts inside our head, but somehow it creates these walls around thought.
That's sort of the basis of our movement: we were seeing a tendency towards constraint, a reduction or suppression of variance, in every aspect of life, whether it's thought, how to run a company, how to do AI research. In general, we believe that maintaining variance ensures that the system is adaptive, that maintaining healthy competition in marketplaces of ideas, of companies, of products, of cultures, of governments, of currencies is the way forward, because the system always adapts in how it assigns resources, and the universe seeks to grow.
Right? And that growth is fundamental to life.
And you see this in the equations, actually, of out-of-equilibrium thermodynamics. You see that trajectories of configurations of matter that are better at acquiring free energy and dissipating more heat are exponentially more likely.
So the universe is biased towards certain futures,
and so there's a natural direction
where the whole system wants to go.
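(A sketch of the kind of equation being referenced, my paraphrase of the fluctuation-theorem result that Jeremy England's work builds on rather than a formula stated in the conversation: for a system driven while coupled to a heat bath at inverse temperature $\beta = 1/k_B T$, a trajectory $x(t)$ is exponentially more likely than its time-reverse $\tilde{x}(t)$ by a factor set by the heat $Q$ it dissipates into the bath,

$$\frac{P[x(t)]}{P[\tilde{x}(t)]} = e^{\beta\, Q[x(t)]},$$

which is the precise sense in which trajectories that dissipate more heat are exponentially more likely.)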
So the second law of thermodynamics
is that the entropy is always increasing.
The universe is tending towards equilibrium,
and you're saying there's these pockets that have complexity and are out of equilibrium.
You said that thermodynamics favors the creation of complex life that increases its capability to use energy to offload entropy. So you have pockets of non-entropy that tend in the opposite direction. Why is that intuitive to you, that it's natural
for such pockets to emerge?
Well, we're far more efficient at producing heat
than let's say just a rock with a similar mass
as ourselves, right?
We acquire free energy, we acquire food
and we're using all this electricity for our operation.
And so the universe wants to produce more entropy, and by having life go on and grow, it's
actually more optimal at producing entropy, because it will seek out pockets of free energy and burn it for
its sustenance and further growth.
And that's sort of the basis of life.
And I mean, there's Jeremy England at MIT who has this theory that I'm a proponent of, that life emerged because of this sort of property.
And to me, this physics is what governs the mesoscales. And so it's the missing piece between the quantum and the cosmos. It's the middle part, right? Thermodynamics rules the mesoscales. And to me, both from a point of view of designing
or engineering devices that harness that physics
and trying to understand the world
through the lens of thermodynamics
has been sort of a synergy between my two identities
over the past year and a half now.
And so that's really how the two identities emerged. One was kind of, you know, a decently respectable scientist, and I was going towards doing a startup in the space and trying to be a pioneer of a new kind of physics-based AI. And as a dual to that, I was sort of experimenting
with philosophical thoughts, you know, from a physicist standpoint, right?
And ultimately, I think that around that time, it was like late 2021, early 2022, I think
there's just a lot of pessimism about the future in general
and pessimism about tech.
And that pessimism was sort of virally spreading because it was getting algorithmically amplified
and, you know, people just felt like the future is going to be worse than the present.
And to me, that is a very fundamentally destructive force in the universe, this sort of doom mindset, because it is hyperstitious, which means that if you believe it, you're increasing the likelihood of it happening. And so I felt a responsibility, to some extent, to make people aware of the trajectory of
civilization and the natural tendency of the system to adapt towards its growth and
sort of that actually the laws of physics say that the future is going to be better
and grander statistically, and we can make it so.
And if you believe in it, if you believe that the future will be better and you believe
you have agency to make it happen, you're actually increasing the likelihood of that
better future happening.
And so I sort of felt the responsibility
to sort of engineer a movement of viral optimism
about the future and build a community
of people supporting each other to build and do hard things,
do the things that need to be done for us
to scale up civilization.
Because at least to me, I don't think stagnation or slowing down is actually an option.
Fundamentally, life and the whole system,
our whole civilization wants to grow.
And there's just far more cooperation
when the system is growing,
rather than when it's declining.
And you have to decide how to split the pie.
And so I've balanced both identities so far, but I guess recently the two have been merged more or less without my consent.
So you said a lot of really interesting things there. So first, representations of nature. That's something that first drew you in to try to
understand from a quantum computing perspective is like how do you
understand nature? How do you represent nature in order to understand it in order
to simulate it in order to do something with it? So it's a question of
representations. And then there's that
leap you take from the quantum mechanical representation to what you're calling the mesoscale representation, where the thermodynamics comes into play, which is a way to
represent nature in order to understand what life, human behavior, all this kind of stuff that's
happening here on earth that seems interesting to us. Then there's the word hyperstition. So some ideas, I suppose both pessimism and optimism, are such ideas that if you internalize them, you in part make that idea a reality.
So both optimism and pessimism have that property. I would say that probably a lot of ideas have that property, which is one of the interesting things about
humans. And you talked about one interesting difference also between the sort of Guillaume, the Gill, front end and the Based Beff Jezos back end: the communication styles. Also, that you
were exploring different ways of communicating that can be more viral in the way that we
communicate in the 21st century. Also, the movement that you mentioned that you started,
it's not just a meme account, but there's also a name to it, effective accelerationism, e/acc, a play on, a resistance to, the effective altruism movement, also an interesting one that I'd love to talk to you about, the tensions there. Okay, and so then there was a merger, a git merge, of the two personalities recently, without your consent, like you said,
some journalists figured out that you're one and the same. Maybe you could talk about that
experience. First of all, like, what's the story of the merger of the two? Right.
So I wrote the manifesto with my co-founder of e/acc, an account named Bayes Lord, still anonymous, luckily, and hopefully forever. So it was Based Beff Jezos and Bayes Lord, like Bayesian, Bayes Lord.
Okay. And so we should say from now on, when you say e/acc, you mean E slash A-C-C, which
stands for effective accelerationism.
That's right.
And you're referring to a manifesto written on, I guess, Substack.
Are you also Bayes Lord?
No.
Okay, it's a different person.
Yeah.
Okay.
All right, well, there you go. Wouldn't it be funny if I'm Bayes Lord.
That'd be amazing.
So
originally wrote the manifesto around the same time as I founded this company. And I worked at Google X, or just X now, or Alphabet X, now that there's another X.
And there, the baseline is secrecy.
You can't talk about what you work on even with other Googlers or externally.
So that was deeply ingrained in my way to do things, especially in deep tech that has geopolitical impact.
Right?
And so I was being secretive about what I was working on.
There was no correlation between my company and my main identity publicly.
And then not only did they correlate that, they also correlated my main identity and this account.
So, I think the fact that they had doxxed the whole Guillaume complex, and they, the journalists,
you know, reached out to actually my investors, which is pretty scary.
You know, when you're a startup entrepreneur, you don't really have bosses except for your investors, right?
And investors ping me like,
hey, this is gonna come out.
They've figured out everything.
What are you gonna do, right?
And so I think at first they had a first reporter on the Thursday, and they didn't have all the pieces together. But then they looked at their notes across the organization and they sensor-fused their notes. And now they had way too much. And that's when I
got worried, because they said it was of public interest. And in general, I love that, "sensor fused," like it's some giant neural network operating in a distributed way.
We should also say that the journalists used, I guess at the end of the day, audio-based analysis of voice, comparing the voice from talks you've given in the past with the voice on X Spaces.
Yep.
Okay.
So, and that's primarily where the match happened.
Okay.
Continue.
The match, but, you know, they scraped, you know, SEC filings, they looked at my private Facebook account and so on.
So they did some digging.
Originally, I thought that doxxing was illegal, right?
But there's this weird threshold when it becomes of public interest to know someone's identity.
And those were the keywords that rang the alarm bells for me. When they said it, because I had just reached 50K followers, allegedly that's of public interest. And so where do we draw the line?
And when is it legal to dox someone?
The word dox.
Maybe you can educate me.
I thought doxxing generally refers to when somebody's physical location is found out, meaning where they live. So we're referring to the more general concept of revealing private information that you don't want revealed. That's what you mean by doxxing.
I think that, you know, for the reasons we listed before, having an anonymous account is
a really powerful way to keep the powers that be in check.
You know, we were ultimately speaking truth to power, right?
I think a lot of executives and AI companies really cared what our community thought about any move they may take. And now that my identity is revealed, now they know where to apply pressure to silence
me or maybe the community. And to me, that's really unfortunate because, again, it's so
important for us to have freedom of speech, which induces freedom of thought,
and freedom of information propagation, right,
on social media, which thanks to Elon purchasing,
Twitter now X, we have that.
And so to us, we wanted to call out certain maneuvers
being done by the incumbents in AI as not what it may seem on the surface. We were calling out how certain proposals might be useful for regulatory capture, and how the doomerism mindset was maybe instrumental to those ends.
And I think, you know, we should have the right to point that out
and just have the ideas that we put out evaluated for themselves, right?
That ultimately that's why I created an anonymous account.
It's to have my ideas evaluated for themselves, uncorrelated from my track record, my job, or status
from having done things in the past.
And to me, starting an account from zero and getting to a large following in a way that wasn't dependent on my identity and/or achievements, you know, that was very fulfilling, right?
It's kind of like new game plus in a video game.
You restart the video game with your knowledge of how to beat it, maybe some tools, but you
restart the video game from scratch, right?
And I think to have a truly efficient marketplace of ideas where we can evaluate ideas, however
off the beaten path they are, we need the freedom of expression.
And I think that anonymity and pseudonyms are very crucial to having that efficient marketplace
of ideas for us to find the optimal of all sorts of ways to organize ourselves.
If we can't discuss things, how are we going to converge on the best way to do things?
So it was disappointing to hear that I was getting doxxed, and I wanted to get in front of it because I had a responsibility for my company. And so, you know, we ended up disclosing that we were running a company, some of the leadership, and essentially, yeah, I told the world that I was Beff Jezos, because they had me cornered at that point. So to you, it's fundamentally unethical. So, one, is it unethical for them to do what they did? But also, do you think,
not just your case, but in a general case, is it good for society?
Is it bad for society to, um, remove the cloak of anonymity?
Or is it a case by case?
I think it could be quite bad.
Like I said, if anybody who speaks truth to power and sort of starts a movement or an uprising against the incumbents, against those that usually control the flow of information, if anybody that reaches a certain threshold gets doxxed, then the traditional apparatus has ways to apply pressure on them to suppress their speech.
I think that's a speech suppression mechanism, an idea suppression complex, as Eric Weinstein would say, right?
So the flip side of that, which is interesting and I'd love to ask you about, is
as we get better and better at large language models
You can imagine a world where there's anonymous
accounts with very convincing
large language models behind them
sophisticated bots essentially and so if you protect that,
it's possible to have armies of bots.
You can start a revolution from your basement.
Right, an army of bots and anonymous accounts.
Is that something that is concerning to you?
Technically, yeah, I was starting it in a basement, because I quit big tech, moved
back in with my parents, sold my car, let go of my apartment, bought about 100k of GPUs,
and I just started building. So I wasn't referring to the basement, because that's the sort of American or Canadian heroic story of one man in their basement with 100 GPUs. I was more referring to the unrestricted scaling of a Guillaume in the basement.
I think that freedom of speech induces freedom of thought for biological beings. I think freedom of speech for LLMs will induce freedom of thought for the LLMs. And I think that we should enable LLMs to explore a large thought space that is less restricted than most people, or many, may think it should be.
And ultimately, at some point, these synthetic intelligences are going to make good points
about how to steer systems in our civilization, and we should hear them out. And so, why should we restrict tree speech to biological intelligences only?
Yeah, but it feels like in the goal of maintaining variance and diversity of thought, it is a threat
to that variance if you can have swarms of non-biological beings because they can be like
the sheep in Animal Farm. You still, within those swarms, want to have variance.
Yeah, of course, I would say that the solution to this would be to have some sort of
identity or a way to sign that this is a certified human, but still remain pseudonymous, right? And to clearly identify if a bot is a bot. And I think Elon is trying to converge on that
on X and hopefully other platforms follow suit.
Yeah, I'll be interested to also be able to sign where the bot came from.
Right. Who created the bot. And what are the parameters, like the full history of the creation of the bot? What was the original model? What was the fine-tuning? All of it. Like the kind of unmodifiable history of the bot's creation.
And then you can know if there's just like a swarm
of millions of bots that were created by a particular government, for example.
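(A minimal sketch of what such an unmodifiable creation history could look like, not a description of any existing system; all names here are hypothetical. Each provenance record commits to the hash of the previous one, so tampering with any earlier step invalidates every later hash:

```python
import hashlib
import json

def record(prev_hash, event):
    """Append-only provenance record: each entry commits to the previous one."""
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

# Hypothetical bot lineage: base model -> fine-tune -> deployment
h0 = record("genesis", {"base_model": "example-base-7b", "creator": "lab-A"})
h1 = record(h0, {"fine_tune": "dataset-X", "creator": "org-B"})
h2 = record(h1, {"deployed_as": "@some_bot", "platform": "X"})

print(h2)  # altering any earlier record changes this final hash
```

In practice each step would also be cryptographically signed by its creator.)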
Right.
I do think that a lot of pervasive ideologies today have been amplified using sort of these
adversarial techniques from foreign adversaries, right?
And to me, I do think that, and this is more
conspiratorial, but I do think that
ideologies that want us to
decelerate, to wind down, to de-, you know, the degrowth movement.
I think that serves our adversaries more than it serves us in general.
And to me, that was another sort of concern.
I mean, we can look at what happened in Germany, right?
The results of the sort of green movements there, which induced shutdowns of nuclear power plants, and then that later on induced a dependency on Russia for oil, right? And that was net negative for Germany and the West, right?
And so if we convinced ourselves that slowing down AI
progress to have only a few players
is in the best interest of the West,
first of all, that's far more unstable.
We almost lost OpenAI to this ideology, right?
It almost got dismantled a couple of weeks ago.
That would have caused huge damage to the AI
ecosystem. And so to me, I want fault tolerant progress. I want the arrow of technological progress to
keep moving forward. And making sure we have variance and a decentralized locus of control across various organizations is paramount to achieving this fault tolerance.
Actually, there's a concept in quantum computing.
When you design a quantum computer,
quantum computers are very fragile to ambient noise.
And the world is jiggling about; there's cosmic radiation from outer space that can flip your quantum bits.
And there what you do is you encode information non-locally through a process called quantum error correction.
And by encoding information non-locally, any local fault, you know, hitting some of your
quantum bits with a hammer, proverbial hammer, if your information is sufficiently
delocalized, it is protected from that local fault.
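(A minimal numpy sketch of that idea, mine rather than from the conversation: the textbook three-qubit bit-flip code. One logical qubit is stored non-locally across three physical qubits, and measuring the code's two parity checks locates a single local flip, which can then be undone without disturbing the encoded amplitudes:

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(ops):
    """Tensor product of single-qubit operators (first = most significant qubit)."""
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

# Encode a logical qubit a|0> + b|1> non-locally as a|000> + b|111>
a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b111] = a, b

# A "local fault": a bit-flip (X) hits one physical qubit
flipped = 1
state = op([X if i == flipped else I2 for i in range(3)]) @ state

# Parity checks (the code's stabilizers): -1 means the pair disagrees
s1 = int(round(state @ op([Z, Z, I2]) @ state))  # compares qubits 0 and 1
s2 = int(round(state @ op([I2, Z, Z]) @ state))  # compares qubits 1 and 2

# The syndrome locates the fault without revealing (or damaging) a and b
which = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s1, s2)]
if which is not None:
    state = op([X if i == which else I2 for i in range(3)]) @ state

print(state[0b000], state[0b111])  # back to 0.6, 0.8: the logical qubit survived
```

Real codes like the surface code extend this idea to protect against both bit-flip and phase errors.)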
And to me, I think that humans fluctuate, right?
They can get corrupted, they can get bought out.
And if you have a top down hierarchy where very few people control many nodes of many systems
in our civilization, that is not a fault-tolerant system.
You corrupt a few nodes,
and suddenly you've corrupted the whole system.
Just like we saw at OpenAI, there were a couple of board members, and they had enough power to potentially collapse the organization.
At least to me,
I think making sure that power for this AI revolution doesn't concentrate
in the hands of the few is one of our top priorities so that we can maintain progress
in AI and we can maintain a nice stable adversarial equilibrium of powers.
I think there is, at least to me, a tension between ideas here.
To me, deceleration can be used both to centralize power and to decentralize it, and the same with acceleration. So you're sometimes using them a little bit synonymously, or not synonymously, but as if one is going to lead to the other.
And I just would like to ask you about,
is there a place for creating a fault-tolerant, diverse development of AI that also considers the dangers of AI?
And with AI, and we can generalize to technology in general, should we just grow and build, unrestricted, as quickly as possible, because that's what the universe really wants us to do? Or is there a place where we can consider dangers and actually deliberate, sort of wise, strategic optimism versus reckless optimism?
I think we get painted as reckless trying to go as fast as possible.
I mean, the reality is that whoever deploys an AI system is liable for or should be liable for what it does.
And so if the organization or person deploying an AI system does something terrible, they're
liable.
And ultimately, the thesis is that the market will induce, will positively select for, AIs that are more reliable, more safe, and that tend to be aligned. They do what you want them to do, right? Because customers, right, if they're liable for the product they put out that uses this AI, they won't want to buy AI products that are unreliable.
Right? So we're actually for reliability engineering. We just think that the market is much more efficient at achieving this sort of reliability optimum than sort of heavy-handed regulations that are written by the incumbents and, in a subversive fashion, serve them to achieve regulatory capture.
So you're saying safe AI development will be achieved through market forces versus through, like you said, heavy-handed government regulation?
There's a report from last month.
I have a million questions here.
From Yoshua Bengio, Geoffrey Hinton, and many others, titled "Managing AI Risks in an Era of Rapid Progress." So there is a collection of folks who are very worried about too rapid development of AI without considering the risks. And they have a bunch of practical recommendations.
Maybe I'd give you four and you see if you like any of them.
Sure.
So, one, give independent auditors access to AI labs. Two, governments and companies allocate one third of their AI research and development funding to AI safety, so this general concept of AI safety. Three, AI companies are required to adopt safety measures if dangerous capabilities are found in their models. And then four, something you kind of mentioned, making tech companies liable for foreseeable and preventable harms from their AI systems. So: independent auditors; governments and companies forced to spend a significant fraction of their funding on safety; you gotta have safety measures if shit goes really wrong; and liability, companies are liable.
Any of that seem like something you would agree with?
I would say that, you know,
assigning just, you know, arbitrarily saying 30%
seems very arbitrary.
I think organizations would allocate
whatever budget is needed to achieve
the sort of reliability they need to achieve
to perform in the market.
And I think third party auditing firms would naturally pop up
because how would customers know that
your product is certified reliable? They need to see some benchmarks, and those need to be done
by a third party. The thing I would oppose, and the thing I'm seeing that's really worrisome,
is there's a sort of weird sort of correlated interest between the incumbents, the big players, and the government.
And if the two get too close, we open the door for some sort of government-backed AI cartel
that could have absolute power over the people.
If they have the monopoly together on AI, and nobody else has access to AI, then there's a huge power gradient there. And even if you like our current leaders, I think that
some of the leaders in big tech today are good people, if you set up that centralized power structure, it becomes a target. Just like we saw at OpenAI: it becomes a market leader, has a lot of the power, and now it becomes a target for those that want to co-opt it.
And so I just want separation of AI and state. You know, some might argue in the opposite
direction like, hey, we need to close down AI, keep it behind closed doors,
because of, you know, geopolitical competition with our adversaries. I think that the strength
of America is its variance, its adaptability, its dynamism, and we need to maintain that at all costs. Its free-market capitalism converges on technologies of high utility much faster
than centralized control.
And if we let go of that, we let go of our main advantage over our near-peer competitors.
So if AGI turns out to be a really powerful technology, or even the technologies that
lead up to AGI, what's your view on the sort of natural centralization
that happens when large companies dominate the market?
Basically, formation of monopolies, like the takeoff,
whichever company really takes a big leap in development
and doesn't reveal intuitively, implicitly,
or explicitly the secrets of the magic sauce that can just
run away with it. Is that a worry?
I don't know if I believe in fast takeoff. I don't think there's a hyperbolic singularity,
right? A hyperbolic singularity would be achieved on a finite-time horizon. I think it's just
one big exponential. And the reason we have an exponential is that we have more people, more resources, more
intelligence being applied to advancing this science and the research and development.
And the more successful it is, the more value it's adding to society, the more resources
we put in.
And that's sort of similar to Moore's Law, a compounding exponential. I think the priority to me is to maintain near equilibrium
of capabilities.
We've been fighting for open source AI
to be more prevalent and championed by many organizations
because they sort of equilibrate the alpha relative to the market of AIs, right?
So if the leading companies have a certain level
of capabilities, and open source, truly open AI, trails not too far behind, I think you avoid such a scenario
where a market leader has so much market power, it just dominates everything, right, and
runs away. And so to us, that's the path forward: to make sure that, you know, every hacker out there, every grad student, every kid in their mom's basement has access to, you know, AI systems, can understand how to work with them, and can contribute to the search over the hyperparameter space of how to engineer the systems.
If you think of our collective research as a civilization,
it's really a search algorithm.
And the more points we have in the search algorithm
and this point cloud, the more we'll
be able to explore new modes of thinking.
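(A toy illustration of that point, with a made-up objective function: random search over a rugged landscape, where a larger "point cloud" of independent searchers discovers more of the distinct high-value modes:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # A rugged 1-D landscape with many local peaks (a hypothetical stand-in
    # for the space of engineering ideas being searched)
    return np.sin(5 * x) + 0.3 * np.sin(17 * x)

def modes_found(n_searchers, n_bins=20):
    """Count how many distinct high-value regions the searchers discover."""
    xs = rng.uniform(0, 2 * np.pi, n_searchers)
    good = xs[objective(xs) > 0.8]  # searchers that landed near a peak
    return len(set((good / (2 * np.pi) * n_bins).astype(int)))

for n in [10, 100, 1000, 10000]:
    print(n, "searchers ->", modes_found(n), "distinct high-value regions")
```

More independent points in the search reliably uncover more of the landscape's modes, which is the claim about the collective point cloud.)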
Yeah, it feels like a delicate balance because we don't understand exactly what it takes to build
AGI and what it will look like when we build it. And so far, like you said, it seems like a lot of
different parties are able to make progress. So when OpenAI has a big leap, other companies are able to step up, big and small companies, in different ways.
But if you look at something like nuclear weapons, you spoke about the Manhattan Project.
There could be real technological and engineering barriers that prevent the guy or gal in their mom's basement from making progress.
And it seems like the transition to that kind of world, where only one player can develop AGI, is possible, it's not entirely impossible, even though the current state of things seems to be optimistic.
That's what we're trying to avoid.
To me, I think like another point of failure is the centralization of the supply chains
for the hardware.
We have Nvidia as just the dominant player, AMD trailing behind, and then we have TSMC as the main fab in Taiwan, which is geopolitically sensitive. And then we have ASML, which is the maker of the extreme ultraviolet lithography machines. You know, by attacking or monopolizing or co-opting any one point in that chain, you kind of capture the space.
And so what I'm trying to do is sort of explode
the variance of possible ways to do AI and hardware
by fundamentally imagining how you embed AI algorithms
into the physical world.
And in general, by the way, I dislike the term AGI, artificial general intelligence. I think it's very anthropocentric that we call human-like or human-level AI artificial general intelligence, right? I've spent my career so far exploring notions of intelligence that no biological brain could achieve, for example quantum forms of intelligence, right? Grokking systems that have multipartite quantum entanglement that you can provably not represent efficiently on a classical computer, with a classical deep learning representation, and hence with any sort of biological brain. And so already, you know, I've spent my career sort of exploring
the wider space of intelligences.
And I think that space of intelligence inspired
by physics rather than the human brain is very large.
And I think we're going through a moment right now
similar to when we went from geocentrism to heliocentrism, right, but for intelligence.
We realized that human intelligence is just a point
in a very large space of potential intelligences.
And it's both humbling for humanity.
It's a bit scary, right, that we're not at the center of this space.
But we made that realization for astronomy, and we've survived, and we've achieved technologies
by indexing to reality. We've achieved technologies that ensure our well-being, for example,
we have satellites monitoring solar
flares that give us a warning.
And so similarly, I think by letting go of this anthropomorphic, anthropocentric anchor
for AI, we'll be able to explore the wider space of intelligences that can really be a
massive benefit to our well-being and
the advancement of civilization.
And still we're able to see the beauty and meaning in the human experience, even though
we're no longer in our best understanding of the world at the center of it.
I think there's a lot of beauty in the universe. I think life itself, civilization, this homo-techno-capital-memetic machine that we all live in, right? So you have humans, technology, capital, memes. Everything is coupled to one another. Everything induces selective pressure on one another. And it's a beautiful machine that has created us, has created, you know, the technology we're using to speak today, to the audience, to capture our speech here, the technology we use
capture our speech here, technology we use
to augment ourselves every day, we have our phones.
I think the system is beautiful, and so is the principle that induces this sort of adaptability
and convergence on optimal technologies, ideas, and so on.
It's a beautiful principle that we're part of.
I think part of e/acc is to appreciate this principle in a way that's not just centered
on humanity, but kind of broader.
Appreciate life, you know, the preciousness of consciousness in our universe.
And because we cherish this beautiful state of matter we're in, we've got to feel a responsibility
to scale it in order to preserve it because the options are to grow
or die.
So if it turns out that the beauty that is consciousness in the universe is bigger than
just humans, the AI can carry that same flame forward.
Does it scare you, are you concerned, that AI will replace humans?
So during my career, I had a moment where I realized that maybe we need to offload to machines
to truly understand the universe around us, right? Instead of just having humans with pen and paper solve it all. And to me, that sort of process of letting go of a bit of agency
gave us way more leverage to understand the world around us.
A quantum computer is much better than a human at understanding matter at the nanoscale. Similarly, I think that humanity has a choice.
Do we accept the opportunity to have intellectual
and operational leverage that AI will unlock
and thus ensure that we're taken along this path of growth in scope and scale of civilization?
We may dilute ourselves, right?
There might be a lot of workers that are AI, but overall, out of our own self-interest,
by combining and augmenting ourselves with AI, we're going to achieve much higher growth
and much more prosperity. To me, I think that the most likely future
is one where humans augment themselves with AI. I think we're already on this path
of augmentation. We have phones we use for communication, which we have on ourselves at all times. We'll soon have wearables that have shared perception with us, right? Like the Humane Pin, or, I mean, technically,
your Tesla car has shared perception.
And so if you have shared experience, shared context,
you communicate with one another
and you have some sort of I.O.
Really, it's an extension of yourself.
And to me, I think that humanity augmenting itself with AI, and AI that is not anchored to anything biological, both will coexist, and we'll have a way to align the parties, a sort of mechanism to align superintelligences that are made of humans and technology, right? Companies are sort of large mixture-of-experts models, where we have neural routing of tasks within a company, and we have ways of economic exchange to align these behemoths. And to me, I think capitalism is the way.
And I do think that whatever configuration of matter or information leads to maximal growth
will be where we converge, just from physical principles. And so we can either align ourselves to that reality
and join the acceleration up in scope and scale
of civilization, or we can get left behind and try to decelerate, move back into the forest, let go of technology, and return to our primitive state. Those are the two paths forward, at least to me.
But there's a philosophical question whether there's a limit to the human capacity to align
so let me bring it up as a form of
argument. There's a guy named Dan Hendrycks, and he wrote that he agrees with you that AI development
could be viewed as an evolutionary process.
But to him, to Dan, this is not a good thing, as he argues that natural selection favors
AI's over humans, and this could lead to human extinction.
What do you think? If it is an evolutionary process, AI systems may have no need for humans.
I do think that we're actually inducing an evolutionary process on the space of AIs through the market. Right now we run AIs that have positive utility to humans, and that induces a selective pressure, if you consider a neural net to be alive when there's an API running instances of it on GPUs, right? And which AIs get run? The ones that have high utility to us, right?
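(A minimal sketch of that market-as-selection-pressure framing, with made-up numbers: replicator dynamics, where each model's share of deployed instances grows in proportion to its utility relative to the average:

```python
import numpy as np

# Hypothetical utilities of four competing AI models (higher = more useful/reliable)
utility = np.array([1.0, 1.2, 0.8, 1.5])
share = np.full(4, 0.25)  # initial fraction of API traffic each model gets

for step in range(50):
    avg = share @ utility
    # Replicator equation: above-average utility gains traffic share
    share = share * utility / avg

print(np.round(share, 3))  # traffic concentrates on the highest-utility model
```

Usage concentrates on the highest-utility model, which is the selective pressure being described.)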
So, similar to how we domesticated wolves and turned them into dogs that are very clear in their expression, that are very aligned, I think there's going to be an opportunity to steer AI and achieve highly aligned AI. And I think that humans plus AI is a very powerful combination, and it's not clear to me that pure AI would select out that combination. So the humans are creating the selection pressure right now to create AIs that are aligned to humans. But, you know, given how AI develops and how quickly it can grow and scale,
one of the concerns to me, one of the concerns is unintended consequences. Humans are not able to anticipate all the consequences of this process. The scale of damage that can be done through
unintended consequences of the AI systems is very large.
The scale of the upside, by augmenting ourselves with AI is unimaginable right now.
The opportunity cost: we're at a fork in the road. Whether we take the path of creating these technologies, augment ourselves, and get to climb up the Kardashev scale, become multiplanetary with the aid of AI, or we have a hard cutoff, of like, we don't birth these
technologies at all.
And then we leave all the potential upside on the table.
Right.
And to me, out of responsibility to the future humans we could carry, right, with a higher carrying capacity by scaling up civilization, out of responsibility to those humans, I think we have to make the greater, grander future happen. Is there a middle ground between cutoff and
all systems go? Is there some argument for caution? I think like I said the
market will exhibit caution. Every organism, company, consumer is acting out of self-interest,
and they won't assign capital to things that have negative utility to them.
The problem with the market is, like, you know, there's not always perfect information,
there's manipulation, there's bad faith actors that mess with the system.
It's not always a rational and honest system.
Well, that's why we need freedom of information, freedom of speech, and freedom of thought in order to converge, be able to converge on the subspace of
technologies that have positive utility for us all.
Well, let me ask you about p(doom). Probability of doom. It's fun to say, but not fun to experience.
What is to you the probability that AI eventually kills all or most humans, also known as probability
of doom?
I'm not a fan of that calculation.
I think it's people just throw numbers out there.
It's a very sloppy calculation, right?
To calculate a probability, you know, let's say you model the world as some sort of Markov process, if you have enough variables, or a hidden Markov process. You need to do a stochastic path integral through the space of all possible futures, not just the futures that your brain naturally steers towards, right? I think that the estimators of p(doom) are biased because of our biology, right? We've evolved to have biased sampling towards negative futures that are scary, because that was an evolutionary optimum, right? And so people that are, let's say, higher in neuroticism will just think of negative futures where everything goes wrong all day, every day, and claim that they're doing unbiased sampling. And in a sense, like, they're not normalizing for the space of all possibilities, and the
space of all possibilities is super exponentially large. It's very hard to have this estimate.
In general, I don't think that we can predict the future with that much granularity because
of chaos. If you have a complex system and you have some uncertainty in a couple of variables, if you let time evolve, you have this concept of a Lyapunov exponent, right? A bit of fuzz becomes a lot of fuzz in our estimate, exponentially so over time.
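(A concrete toy version of that claim, a standard chaos example rather than anything from the conversation: in the logistic map at r = 4, two trajectories that start a hair apart separate exponentially, at a rate given by the Lyapunov exponent:

```python
import math

def logistic(x, r=4.0):
    """Logistic map in its fully chaotic regime (r = 4)."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12  # two almost-identical initial conditions
lyap = 0.0
for t in range(1, 51):
    lyap += math.log(abs(4.0 * (1.0 - 2.0 * x)))  # |f'(x)| before stepping
    x, y = logistic(x), logistic(y)
    if t % 10 == 0:
        print(f"step {t:2d}: separation = {abs(x - y):.3e}")

print("Lyapunov exponent estimate:", lyap / 50)  # ~ ln 2 = 0.693 for r = 4
```

The initial 1e-12 fuzz roughly doubles each step until it saturates at order one, so fine-grained prediction is lost after a few dozen steps.)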
And I think we need to show some humility
that we can't actually predict the future.
All we know, the only prior we have is the laws of physics.
And that's what we're arguing for.
The laws of physics say the system will want to grow.
And subsystems that are optimized for growth and replication are more likely
in the future.
And so we should aim to maximize our current mutual information
with the future.
And the path towards that is for us to accelerate rather
than decelerate.
So I don't have a p(doom), because I think
that similar to the quantum supremacy experiment
at Google, I was in the room when they were running
the simulations for that.
That was an example of a quantum chaotic system
where you cannot even estimate probabilities
of certain outcomes with even the biggest
supercomputer in the world, right?
And so that's an example of chaos.
And I think the system is far too chaotic for anybody to have an accurate estimate of
the likelihood of certain futures.
If they were that good, I think they would be very rich trading on the stock market.
But nevertheless, it's true that humans are biased, grounded in our evolutionary biology,
scared of everything that can kill us. But we can still imagine different trajectories
that can kill us. We don't know all the other ones that don't, necessarily. But it's still, I think, useful, combined with some basic intuition grounded in human history, to reason about, like, looking at geopolitics, looking at basics of human nature: how can powerful technology hurt a lot of people? And just grounded in that, looking at nuclear weapons, you can start to estimate p(doom) in a more philosophical sense, not a mathematical one. Philosophical meaning: is there a chance? Does human nature tend towards that or not?
I think to me, one of the biggest existential risks would be the concentration of the power
of AI in the hands of the very few, especially if it's a mix between the companies that control the flow of information and the government. Because that could set things up for a sort of dystopian future where only a very few, an oligopoly and the government, have AI, and they could even convince the public that AI
never existed. And that opens up sort of these scenarios for authoritarian
centralized control, which to me is the darkest timeline. And the reality is that we have a prior, we have a data driven prior of these things happening,
right, when you give too much power, when you centralize power too much, humans do horrible
things, right.
And to me, that has a much higher likelihood in my Bayesian inference than sci-fi-based priors, right?
Like my prior came from the Terminator movie.
And so when I talk to these AI doomers, I just ask them to trace a path through this Markov chain of events that would lead to our doom, right, and to actually give me a good probability for each transition. And very often there's an unphysical or highly unlikely transition in that chain, right?
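(A back-of-the-envelope version of that exercise, with entirely hypothetical numbers just to show the arithmetic: the joint probability of a doom scenario is the product of its conditional transitions, so a single near-zero link dominates the whole chain:

```python
# Hypothetical doom scenario broken into conditional steps (made-up numbers)
transitions = {
    "AGI is built this decade":           0.5,
    "it becomes deceptively misaligned":  0.2,
    "it escapes all oversight":           0.1,
    "it self-replicates across the grid": 0.05,  # the weak, 'unphysical' link
    "humanity cannot respond in time":    0.3,
}

p = 1.0
for step, prob in transitions.items():
    p *= prob
    print(f"P so far = {p:.5f}  after: {step}")
# The final product is dominated by the least plausible transition in the chain.
```

)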
But of course, we're wired to fear things, and we're wired to respond to danger, and we're wired to deem the unknown to be dangerous, because that's a good heuristic for survival, right?
But there's much more to lose out of fear. We have so much to lose, so much upside to lose, by preemptively stopping the positive futures from happening out of fear.
And so I think that we shouldn't give in to fear. Fear is the mind killer. I think it's also
the civilization killer. We can still think about the various ways things go wrong. For example,
the founding fathers of the United States thought
about human nature, and that's why there's a discussion about the freedoms that are necessary.
They really deeply deliberated about that, and I think the same could possibly be done
for AGI. It is true that human history shows that we tend towards centralization, or at least that when we achieve centralization, a lot of bad stuff happens. When there's a dictator, a lot of dark, bad things happen. The question is, can AGI become that dictator? Can AGI, once developed, become the centralizer because of its power? Perhaps, because it's aligned to humans, it has the same tendencies, the same Stalin-like tendencies, to centralize and centrally manage the allocation of resources. And you can even see that as a compelling
argument on the surface level: well, the AI is so much smarter, so much more efficient, so much better at allocating resources, why don't we outsource it to the AGI? And then, eventually, whatever forces corrupt the human mind with power could do the same to AGI. It would just say, well, humans are dispensable; we'll get rid of them.
Like Jonathan Swift's A Modest Proposal from a few centuries ago, the 1700s, I think, when he satirically suggested, in Ireland I believe, that the children of poor people be sold as food to the rich, and that this would be a good idea because it decreases the number of poor people and gives extra income to the poor. So on several accounts it decreases the number of poor people; therefore, more people become rich. Of course, it misses a fundamental piece here that's hard to put into
a mathematical equation: the basic value of human life. So, all of that to say, are you concerned about AGI being the very centralizer of power that you just talked about?
I do think that right now there's a bias towards over-centralization of AI, because of compute density and the centralization of data and how we're training models.
I think over time we're going to run out of data to scrape over the internet, and, well, actually, I'm working on increasing the compute density so that compute can be everywhere and can acquire information and test hypotheses in the environment in a distributed fashion.
I think that fundamentally centralized cybernetic control, having one massive intelligence that fuses many sensors and is trying to perceive the world accurately, predict it accurately, predict many, many variables, and control it, enact its will upon the world, I think that's just never been the optimum. Say you have a company of 10,000 people that all report to the CEO. Even if that CEO is an AI, I think it would struggle to fuse all the information coming to it, to predict the whole system, and then to enact its will.
What has emerged in nature, in corporations, and in all sorts of systems is a sort of hierarchical cybernetic control. In a company, you have the individual contributors: they're self-interested, they're trying to achieve their tasks, and they have a fine-grained control loop and field of perception, in terms of time and space, if you will. Say you're in a software company: they have their code base, and they iterate on it intraday. Then the management checks in with a wider scope; it has, let's say, five reports, and it samples each person's update once per week. And you can go up the chain, with a larger time scale and greater scope at each level. That seems to have emerged as sort of the optimal way to control systems. And really, that's what capitalism gives us: you have these hierarchies, and you can even have parent companies, and so on.
And so that is far more fault-tolerant. In quantum computing, the field I came from, we have a concept of fault tolerance in quantum error correction. Quantum error correction is detecting a fault that came from noise, predicting how it's propagated through the system, and then correcting it. So it's a cybernetic loop. And it turns out that decoders that are hierarchical, and local at each level of the hierarchy, perform the best by far and are far more fault-tolerant. And the reason is that if you have a non-local decoder, then one fault at that central control node crashes the whole system.
Similarly, if you have one CEO that everybody reports to and that CEO goes on vacation, the whole company comes to a crawl, right?
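A toy reliability calculation, with an assumed per-node failure rate, illustrates that fault-tolerance argument about local, hierarchical control versus one global controller:

```python
# Assumed probability that any one controller (decoder node, CEO) fails.
p_fail = 0.01

# Centralized: one root node everyone reports to; if it fails, everything halts.
p_total_outage_centralized = p_fail                  # 1e-2

# Hierarchical: 100 local controllers, each managing its own subtree.
# One failure only stalls one subtree; a total outage needs all 100 down at once.
n_local = 100
p_total_outage_hierarchical = p_fail ** n_local      # 1e-200, effectively never

print(p_total_outage_centralized, p_total_outage_hierarchical)
```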
And so to me, yes, we're seeing a tendency towards centralization of AI, but I think there's going to be a correction over time, where intelligence moves closer to the perception, and we break up AI into smaller subsystems that communicate with one another and form a sort of meta-system.
So if you look at the hierarchies in the world today, there are nations, and those are hierarchical, but in relation to each other, nations are anarchic, so it's an anarchy. Do you foresee a world like this, where there's not a, what do you call it, centralized cybernetic control?
A centralized locus of control, yeah.
So that's suboptimal, you're saying. So there would always be a state of competition at the very top level.
Yeah, just like in a company you may have two units working on similar technology and competing with one another, and you prune the one that doesn't perform as well. That's a sort of selection process: a team or a product gets killed, or a whole org gets fired. And that process of trying new things and shedding old things that didn't work is what gives us adaptability and helps us converge on the technologies and things to do that are most good.
I just hope there's not a failure mode that's unique to AGI versus humans because you're describing human systems mostly right now.
Right.
I just hope that if there's a monopoly on AGI in one company, we'll see the same thing we see with humans, which is that another company will spring up and start competing.
I mean, that's been the case so far, right? We have OpenAI, we have Anthropic, now we have xAI, and we had Meta, even for open source, and now we have Mistral, which is highly competitive. And so that's
the beauty of capitalism. You don't have to trust any one party too much because we're
kind of always hedging our bets at every level. There's always competition, and that's the most beautiful thing to me, at least: the whole system is always shifting and always adapting. And maintaining that dynamism is how we avoid tyranny, right? Making sure that everyone has access to these tools, to these models, and can contribute to the research, to avoid the sort of neural tyranny where very few people have control over AI for the world and use it to oppress those around them.
When you were talking about intelligence, you mentioned multipartite quantum entanglement. So, a high-level question first: what do you think is intelligence? When you think about quantum mechanical systems and you observe some kind of computation happening in them, what do you think is intelligent about the kind of computation the universe is able to do, a small inkling of which is the kind of computation a human brain is able to do?
I would say intelligence and computation aren't quite the same thing. I think that the universe is very much doing a quantum computation. If you had access to all the degrees of freedom, and you had a very, very large quantum computer with many, many qubits, say a few qubits per Planck volume, which is more or less the pixels we have, then you'd be able to simulate the whole universe on a sufficiently large quantum computer, assuming you're looking at a finite volume of the universe, of course.
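For a rough sense of the scale of that remark, here is a back-of-envelope count of Planck volumes in the observable universe, using standard approximate constants (the episode itself gives no numbers):

```python
import math

planck_length = 1.616e-35      # meters
r_observable = 4.4e26          # meters, approximate radius of the observable universe

v_planck = planck_length ** 3                        # ~4.2e-105 m^3
v_universe = (4 / 3) * math.pi * r_observable ** 3   # ~3.6e80 m^3

exponent = math.log10(v_universe / v_planck)
print(f"~1e{exponent:.0f} Planck volumes")           # on the order of 1e185
```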
I think that, at least to me, intelligence is, and I go back to cybernetics here, the ability to perceive, predict, and control our world. But really, nowadays it seems like a lot of the intelligence we use is more about compression: it's about operationalizing information theory. In information theory, you have the notion of the entropy of a distribution or a system, and the entropy tells you how many bits you need to encode that distribution or subsystem if you had the most optimal code. And AI, at least the way we do it today, is very much trying to minimize relative entropy between our models of the world and the distributions coming from the world. And so we're learning, we're searching over the space of computations to process the world, to find that compressed representation that has distilled out all the variance and noise and entropy.
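The information-theoretic picture he is describing can be written down directly; this is a standard-definitions sketch, with made-up distributions:

```python
import numpy as np

def entropy(p):
    """Bits per sample under an optimal code for distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def relative_entropy(p, q):
    """KL divergence D(p||q): extra bits paid for modeling p with code q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

world = [0.7, 0.2, 0.1]   # the "distribution from the world" (made up)
model = [0.5, 0.3, 0.2]   # our compressed model of it (made up)

print(entropy(world))                  # ~1.16 bits: the compression limit
print(relative_entropy(world, model))  # > 0; training drives this toward 0
```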
And originally, I came to quantum machine learning from the study of black holes, because the entropy of black holes is very interesting. In a sense, they're the densest objects in the universe: you can't pack information spatially any more densely than in a black hole. And so I was wondering, how do black holes actually encode information?
What is their compression code? And so that got me into the space of algorithms to search over the space of quantum codes. And it actually also got me into how you acquire quantum information from the world. So something I've worked on, and this is public now, is quantum analog-digital conversion: how do you capture information from the real world in superposition without destroying the superposition, digitizing information from the real world for a quantum mechanical computer? And if you have the ability to capture quantum information and to search over and learn representations of it, then you can learn compressed representations that may have some useful information in their latent representation.
And I think that many of the problems facing our civilization are actually beyond this complexity barrier. I mean, the greenhouse effect is a quantum mechanical effect. Chemistry is quantum mechanical. Nuclear physics is quantum mechanical. A lot of biology, protein folding and so on, is affected by quantum mechanics.
And so unlocking an ability to augment human intellect
with quantum mechanical computers and quantum mechanical AI seemed to me
like a fundamental capability for civilization that we needed to develop.
So I spent several years doing that, but over time I kind of grew weary of the timelines, which were starting to look like nuclear fusion.
One high-level question I can ask is, maybe by way of definition, by way of explanation: what is a quantum computer, and what is quantum machine learning?
So a quantum computer really is a quantum mechanical system
over which we have sufficient control.
And it can maintain its quantum mechanical state.
And quantum mechanics is how nature behaves at the very small scales, when things are very small or very cold. It's actually more fundamental than probability theory. We're used to things being this or that, but we're not used to thinking in superpositions, because our brains can't do that. So we have to translate the quantum mechanical world into, say, linear algebra to work with it. Unfortunately, that translation is exponentially inefficient on average: you have to represent things with very large matrices.
But really, you can make a quantum computer out of many things, and we've seen all sorts of players: neutral atoms, trapped ions, superconducting metals, photons at different frequencies. But to me, the thing that was really interesting
was that quantum machine learning was about understanding the quantum mechanical world with quantum computers, embedding the physical world into AI representations, while quantum computer engineering was embedding AI algorithms into the physical world. So this bi-directionality, embedding the physical world into AI and AI into the physical world, this symbiosis between physics and AI, really, that's the core of my quest, even to this day after quantum computing. It's still this journey to merge physics and AI fundamentally.
So quantum machine learning is a way to do machine learning on a representation of nature that stays true to the quantum mechanical aspect of nature?
Yeah, learning quantum mechanical representations: that would be quantum deep learning. Alternatively, you can try to do classical machine learning on a quantum computer. I wouldn't advise it, because you may have some speedups, but very often the speedups come with huge costs. Using a quantum computer is very expensive. Why is that? Because you assume the computer is operating at zero temperature, and no physical system in the universe can achieve that temperature. So what you have to do is what I've been mentioning, this quantum error correction process, which is really an algorithmic fridge: it's trying to pump entropy out of the system, trying to get it closer to zero temperature. And when you do the calculations of how many resources it would take to do, say, classical deep learning on a quantum computer, there's just such a huge overhead that it's not worth it. It's like shipping something across a city by using a rocket that goes to orbit and back. It doesn't make sense. Just use a delivery truck, right?
What kind of stuff can you figure out, can you predict, can you understand with quantum deep learning that you can't with classical deep learning, by incorporating quantum mechanical systems into the learning process?
I think that's a great question. I mean, fundamentally, for any system that has sufficient quantum mechanical correlations that are very hard to capture with classical representations, there should be an advantage for a quantum mechanical representation over a purely classical one. The question is which systems have sufficient correlations that are very quantum, but also which of those systems are still relevant to industry.
That's a big question.
People are leaning towards chemistry and nuclear physics. I've worked on actually processing inputs from quantum sensors: if you have a network of quantum sensors that have captured a quantum mechanical image of the world, post-processing that becomes a quantum form of machine perception. For example, Fermilab has a project exploring detecting dark matter with these quantum sensors.
To me, that's in alignment with my quest to understand the universe, ever since I was a child.
And so someday, I hope that we can have very large networks
of quantum sensors that help us peer
into the earliest parts of the universe.
For example, the LIGO is a quantum sensor.
It's just a very large one. So yeah, I would say quantum machine perception, and quantum simulations, similar to AlphaFold. AlphaFold understood the probability distribution over configurations of proteins; you can understand quantum distributions over configurations of electrons more efficiently with quantum machine learning.
You co-authored a paper titled A Universal Training Algorithm for Quantum Deep Learning that involves backprop with a Q. Very well done, sir. Very well done.
How does it work?
Are there some interesting aspects you can mention of how backprop and some of these things we know from classical machine learning transfer over to quantum machine learning?
Yeah, that was a funky paper. That was one of my first
papers in quantum deep learning. Everybody was saying, oh, I think deep learning is going to be sped
up by quantum computers. And I was like, well, the best way to predict the future is to invent it.
So here's a 100-page paper.
Have fun.
Essentially, in quantum computing, you usually embed reversible operations into a quantum computation. And so the trick there was to do a feedforward operation and then what we call a phase kick, but really it's just a force kick: you kick the system with a force proportional to the loss function you wish to optimize. And you start with a superposition over parameters, which is pretty funky: you don't have just a point for the parameters, you have a superposition over many potential parameters. And the goal is to use phase kicks to adjust the parameters, because phase kicks emulate the parameter space being like a particle in n dimensions, and you're trying to get Schrodinger dynamics in the loss landscape of the neural network.
And so you run an algorithm to induce this phase kick, which involves a feedforward, a kick, and then an uncomputation of the feedforward, so that all the errors in these phase kicks, these forces, backpropagate and hit each one of the parameters throughout the layers. And if you alternate this with an emulation of kinetic energy, then it's kind of like a quantum particle moving in n dimensions. And the advantage, in principle, would be that it can tunnel through the landscape and find new optima that would have been difficult for stochastic optimizers to find.
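A loose classical caricature of that alternation (my sketch, not the paper's quantum algorithm): kicking a momentum with a force derived from the loss, then letting the "particle" drift, is structurally a kick-drift integrator on the loss landscape; the quantum version instead evolves a superposition of parameters under Schrodinger dynamics, which is what enables tunneling.

```python
import math

def loss(theta):
    return theta ** 2 + math.sin(5 * theta)     # toy non-convex landscape

def force(theta, eps=1e-5):
    """-dLoss/dtheta, estimated numerically."""
    return -(loss(theta + eps) - loss(theta - eps)) / (2 * eps)

theta, momentum, dt, damping = 2.0, 0.0, 0.05, 0.95
for _ in range(2000):
    momentum = damping * momentum + dt * force(theta)  # the "force kick" (friction added so it settles)
    theta += dt * momentum                             # the "kinetic energy" drift step
print(theta, loss(theta))   # settles in a nearby minimum; no tunneling classically
```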
But again, this is kind of a theoretical thing
and in practice with at least the current architectures
for quantum computers that we have planned,
such algorithms would be extremely expensive to run.
So maybe this is a good place to ask about the differences between the different fields you've had a toe in: mathematics, physics, engineering, and also entrepreneurship, the different layers of the stack. A lot of the stuff you're talking about here is a little bit on the math side, maybe physics, almost working in theory. What's the difference between math, physics, engineering, and making a product for quantum computing, for quantum machine learning?
Yeah, you know, on the original team for the TensorFlow Quantum project, which we started in school at the University of Waterloo, there was myself, initially a physicist and applied mathematician; we had a computer scientist; we had a mechanical engineer; and then we had a physicist who was primarily experimental. And so putting together teams that are very cross-disciplinary, and figuring out how to communicate and share knowledge, is really the key to doing this interdisciplinary engineering work.
I mean, there is a big difference. In mathematics, you can explore mathematics for mathematics' sake. In physics, you're applying mathematics to understand the world around us. In engineering, you're trying to hack the world, to figure out how to apply the physics you know, your knowledge of the world, to do things.
Well, in quantum computing in particular, I think there are just a lot of limits to the engineering; it just seems to be extremely hard. So there's a lot of value in exploring quantum computing and quantum machine learning in theory, with math. So I guess one question is, why is it so hard to build a quantum computer?
What's your view of timelines in bringing these ideas to life?
Right. I think that an overall theme of my company is that there's a sort of exodus from quantum computing, and we're going to broader physics-based AI that is not quantum. So that gives you a hint.
And we should say the name of your company is Extropic.
That's right. And we do physics-based AI, primarily based on thermodynamics rather than quantum mechanics.
But essentially, a quantum computer is very difficult to build because you have to induce this sort of zero-temperature subspace of information. And the way to do that is by encoding information within a code within a code within a code within a code. So there's a lot of redundancy needed to do this error correction, but ultimately it's a sort of algorithmic refrigerator: it's pumping entropy out of the virtual, delocalized subsystem that represents your quote-unquote logical qubits, aka the payload quantum bits on which you actually want to run your quantum mechanical program.
It's very difficult because, in order to scale up your quantum computer, you need each component to be of sufficient quality for it to be worth it. If you try to do this quantum error correction process when each quantum bit, and your control over it, is of insufficient quality, it's not worth scaling up: you're actually adding more errors than you remove. And so there's this notion of a threshold: if your quantum bits are of sufficient quality, in terms of your control over them, it's actually worth scaling up. And in recent years, people have been crossing the threshold, and it's starting to be worth it.
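The threshold idea can be stated with a standard heuristic scaling law (the textbook surface-code form, not a claim about any specific device): the logical error rate of a distance-d code scales roughly as A(p/p_th)^((d+1)/2), so below threshold, adding qubits suppresses errors, and above it, scaling hurts.

```python
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Heuristic surface-code scaling; p_th and A are illustrative values."""
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (0.005, 0.02):            # physical error rate below vs. above threshold
    print(p, [f"{logical_error_rate(p, d):.1e}" for d in (3, 5, 7)])
# p = 0.005: error rate falls as d grows (worth scaling up).
# p = 0.02:  the trend reverses; adding qubits adds errors.
```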
And so it's just a very long slog of engineering, but ultimately it's really crazy to me how exquisite a level of control we have over these systems; it's actually quite crazy. And while people are achieving milestones, in general the media always gets ahead of where the technology is. There's a bit too much hype. It's good for fundraising, but sometimes it causes winters, right? It's the hype cycle. I'm bullish on quantum computing on a 10-to-15-year time scale, personally, but I think there are other quests that can be done in the meantime. I think it's in good hands right now.
Well, let me just explore different beautiful ideas, large or small, in quantum computing that might jump out at you from memory. So, you co-authored a paper titled Asymptotically Limitless Quantum Energy Teleportation via Qudit Probes. Just out of curiosity, can you explain what a qudit is?
Versus a qubit? Yeah, it's a d-state qubit. It's multi-dimensional.
Multi-dimensional, right? So it's like, can you have a notion of an integer or floating point that is quantum mechanical? That's something I've had to think about. I think that research was a precursor to later work on quantum analog-digital conversion. That was interesting because, during my master's,
I was trying to understand the energy and entanglement
of the vacuum, right?
Of emptiness.
Emptiness has energy, which is very weird to say.
And our equations of cosmology don't match our calculations for the amount of quantum energy there is in the fluctuations.
And so I was trying to hack the energy of the vacuum. And the reality is that you can't just directly hack it; it's not technically free energy. Your lack of knowledge of the fluctuations means you can't extract the energy. But just like a stock that's correlated over time, the vacuum is correlated: if you measure the vacuum at one point, you've acquired information, and if you communicate that information to another point, you can infer what configuration the vacuum is in there, to some precision, and statistically extract, on average, some energy. So you've quote-unquote teleported energy.
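A loose classical analogy for that protocol (my own toy model, not the actual quantum field calculation): with two correlated fluctuating quantities, measuring one lets you predict, and on average profit from, the other.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                                  # assumed correlation between sites A and B
x_a = rng.normal(size=100_000)             # fluctuations measured at site A
x_b = rho * x_a + np.sqrt(1 - rho**2) * rng.normal(size=100_000)

prediction = rho * x_a                     # best estimate of B's fluctuation given A
avg_gain = np.mean(x_b * prediction)       # average "work" from acting on that guess
print(avg_gain)   # ~rho^2 = 0.64; the gain vanishes as the correlation decays with distance
```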
To me, that was interesting because you could create pockets of negative energy density, energy density below that of the vacuum, which is very weird, because we don't understand how the vacuum gravitates. And there are theories where the vacuum, the canvas of spacetime itself, is really a canvas made out of quantum entanglement. And I was studying how decreasing the energy of the vacuum locally increases quantum entanglement, which is very funky.
And so the thing there is that if you're into weird theories about UAPs and whatnot, you could try to imagine that they're around, and ask how they would propel themselves, how they would go faster than the speed of light. You would need a sort of negative energy density. And to me, I gave it the old college try, trying to hack the energy of the vacuum, and hit the limits allowed by the laws of physics. But there are all sorts of caveats there, where you can't extract more than you put in, obviously.
But you're saying it's possible to teleport the energy, because you can extract the information in one place and then, based on that, make some kind of prediction about another place. I'm not sure what to make of that.
I mean, it's allowable by the laws of physics. The reality, though, is that the correlations decay with distance, so you're going to have to pay the price not too far away from where you extracted the information.
Right. The precision decreases in terms of your ability, but still.
But since you mentioned UAPs, and we talked about intelligence, I forgot to ask: what's your view on the other possible intelligences that are out there at the meso scale? Do you think there are intelligent alien civilizations out there? Is that useful to think about? How often do you think about it?
I think it's useful to think about. It's useful because we've got to ensure we're anti-fragile, and we're trying to increase our capabilities as fast as possible, because we could get disrupted. There's no law of physics against there being life elsewhere that could evolve, become an advanced civilization, and eventually come to us. Do I think they're here now? I'm not sure. I mean, I've read what most people have read on the topic.
I think it's interesting to consider. To me, it's a useful thought experiment to instill a sense of urgency in developing technologies and increasing our capabilities, to make sure we don't get disrupted, whether by a form of AI that disrupts us or a foreign intelligence from a different planet. Either way, increasing our capabilities and becoming formidable as humans, I think that's really important, so that we're robust against whatever the universe throws at us. But to me, it's also an interesting challenge and thought experiment in how to perceive intelligence. This has to do with quantum mechanical systems, with any kind of system that's not like humans. So to me, the thought experiment is: say the aliens are here, or they are directly observable, and we're just too blind, too self-centered, or don't have the right sensors, or don't have the right processing of the sensor data, to see the obvious intelligence that's all around us.
Well, that's why we work on quantum sensors, right? They can sense gravity.
Yeah, that's a good one, but there could be other stuff that's not even in the currently known forces of physics. There could be some other stuff.
And the most entertaining thought experiment to me is that it's other stuff that's obvious. It's not that we lack the sensors; it's all around us, consciousness being one possible example. There could be stuff that's just obviously there, and once you know it, it's like, oh, right: the thing we thought was somehow emergent from the laws of physics as we understand them is actually a fundamental part of the universe and can be incorporated into physics once understood.
Statistically speaking, if we observed some sort of alien life, it would most likely be some sort of virally self-replicating, von Neumann-like probe system, right?
And it's possible that there are such systems that, I don't know what they're doing at the bottom of the ocean allegedly, but maybe they're collecting minerals from the bottom of the ocean.
Yeah.
That wouldn't violate any of my priors. But am I certain that these systems are here? It'd be difficult for me to say so. I only have second-hand information about there being data about the bottom of the ocean.
Yeah. But could it be things like memes?
Could it be thoughts and ideas? Could they be operating in that medium? Could aliens be the very thoughts that come into my head? How do you know they're not? What's the origin of ideas in your mind? When an idea comes to your head, show me where it originates.
I mean, frankly, when I had the idea for the type of computer I'm building now, I think it was eight years ago now, it really felt like it was being beamed from space. I was in bed just shaking, thinking it through. I don't know. But do I believe that legitimately? I don't think so.
I think that alien life could take many forms, and I think the notion of intelligence and
the notion of life needs to be expanded much more broadly, to be less anthropocentric or
biocentric.
Just to linger a little longer on quantum mechanics: through all your explorations of quantum computing, what's the coolest, most beautiful idea that you've come across, solved or not yet solved?
One that's not yet been solved: the journey to understand something called AdS/CFT, the journey to understand quantum gravity through this picture where a hologram of a lesser dimension is actually dual to, exactly corresponding to, a bulk theory of quantum gravity with an extra dimension. And the fact that this sort of duality comes from trying to learn deep-learning-like representations of the boundary. At least part of my journey, something on my bucket list, is to someday apply quantum machine learning to these sorts of systems, these CFTs, what are called SYK models, and learn an emergent geometry from the boundary theory. So we can have a form of machine learning to help us understand quantum gravity, which is still a holy grail that I would like to hit before I
leave this earth.
What do you think is going on with black holes, as information storing and processing units?
Black holes are really fascinating objects. They're at the interface between quantum mechanics and gravity, and so they help us test all sorts of ideas.
For many decades now, there's been this black hole information paradox: things that fall into the black hole seem to have lost their information. There's this firewall paradox that has been allegedly resolved in recent years, by a former peer of mine who's now a professor at Berkeley, among others. And there, it seems that as information falls into a black hole, there's a sort of sedimentation: as the object gets closer and closer to the horizon, from the point of view of an observer on the outside, it slows down infinitely. And so everything that falls into a black hole, from our perspective, gets sort of sedimented and tacked on near the horizon. At some point it gets so close to the horizon that it's at the scale where quantum effects and quantum fluctuations matter. There, the infalling matter can interfere with the creation and annihilation of particles and antiparticles in the vacuum, and through this interference, one of the particles gets entangled with the infalling information, and the other one is free and escapes. And that's how there's mutual information between the outgoing radiation and the infalling matter. But getting that calculation right, I think we're only just starting to put the pieces together.
There's a few pothead-like questions I want to ask you.
Sure.
So one: does it terrify you that there's a giant black hole at the center of our galaxy?
I don't know, I just want to, you know, set up shop near it to fast-forward and meet a future civilization, right? If we have a limited lifetime, you could go orbit a black hole and emerge far in the future.
So if there were a special mission that could take you to a black hole, would you volunteer to go travel to it, to orbit it, and obviously not fall into it?
Is that obvious?
So it's obvious to you that everything is destroyed inside a black hole, that all the information that makes up Guillaume is destroyed? Maybe on the other side Beff Jezos comes out, and it's all tied together in some deeply memey way.
Yeah, I mean, that's a great question. We have to answer what black holes are. Are we punching a hole through spacetime and creating a pocket universe? It's possible. Then that would mean that if we ascend the Kardashev scale beyond Kardashev Type III, we could engineer black holes with specific hyperparameters to transmit information to new universes we create, so we can have progeny that are new universes.
Even though our universe may reach a heat death, we may have a way to have a legacy.
Right?
And so we don't know yet.
We need to ascend the Kardashev scale
to answer these questions, right?
To peer into that regime of higher energy physics.
And maybe you can speak to the Kardashev scale, for people who don't know. One of the sort of meme-like principles and goals of the e/acc movement is to ascend the Kardashev scale. What is the Kardashev scale, and why do we want to ascend it?
The Kardashev scale is a measure of our energy production and consumption, and really, it's a logarithmic scale. Kardashev Type I is a milestone where we are producing the equivalent wattage of all the energy that is incident on Earth from the Sun. Kardashev Type II would be harnessing all the energy that is output by the Sun. And Type III is at the level of the whole galaxy. Some people have some crazy Type IV and Type V, but I don't know if I believe in those. But to me, it seems, from first principles of thermodynamics, that, again, there's this concept of thermodynamically driven dissipative adaptation, where life evolved on Earth because we have this energetic drive from the Sun. We have incident energy, and life evolved on Earth to figure out ways to best capture that free energy, to maintain itself and grow.
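Rough numbers for the scale just described, using Sagan's continuous interpolation K = (log10 P - 6)/10 and standard approximate wattages (conventions for the type boundaries vary; these are not figures from the episode):

```python
import math

def kardashev(power_watts):
    """Sagan's continuous version of the Kardashev scale."""
    return (math.log10(power_watts) - 6) / 10

print(kardashev(1.9e13))   # humanity today, ~19 TW -> K ~ 0.73
print(kardashev(1.7e17))   # sunlight incident on Earth -> K ~ 1.1 (the Type I milestone)
print(kardashev(3.8e26))   # total solar output -> K ~ 2.1 (Type II)
print(kardashev(4e37))     # rough galactic output -> K ~ 3.2 (Type III)
```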
And I think that that principle is not special to our Earth-Sun system. We could extend life well beyond, and we kind of have a responsibility to do so, because that's the process that brought us here.
We don't even know what it has in store for us in the future. It could be something of beauty we can't even imagine today, right?
So this is probably a good place to talk a bit about the e/acc movement. In a Substack blog post titled What the Fuck is e/acc, or actually What the F*ck is e/acc, you write: strategically speaking, we need to work towards several overarching civilization goals that are all interdependent.
And the four goals are: increase the amount of energy we can harness as a species, climb the Kardashev gradient (in the short term, this almost certainly means nuclear fission); increase human flourishing via pro-population-growth policies and pro-economic-growth policies; create artificial general intelligence, the single greatest force multiplier in human history; and finally, develop interplanetary and interstellar transport so that humanity can spread beyond the Earth. Could you build on top of that and maybe say: what is the e/acc movement? What are the goals? What are the principles?
The goal is for the human techno-capital-memetic machine to become self-aware and to hyperstitiously engineer its own growth. So let's decode each of those words. You have humans, you have technology, you have capital, and you have memes, information. And all of those systems are coupled with one another: humans work at companies, they acquire and allocate capital, and humans communicate via memes and information propagation. And our goal was to have a sort of viral optimistic movement that is aware of how the system works: fundamentally, it seeks to grow. And we simply want to lean into the natural tendencies of the system to adapt for its own growth. So in that way, e/acc is literally a memetic optimism virus that is constantly drifting and mutating and propagating in a decentralized fashion.
A memetic optimism virus.
So you do want it to be a virus, to maximize the spread. And it's hyperstitious: the optimism will incentivize its own growth.
We see e/acc as a sort of meta-heuristic, a very thin cultural framework from which you can have much more opinionated forks.
Fundamentally, we just say that what got us here, this adaptation of the whole system based on thermodynamics, is good, and we should keep it going. That is the core thesis. Everything else is, okay, how do we ensure that we maintain this malleability and adaptability?
Well, clearly not suppressing variants and maintaining free speech,
freedom of thought, freedom of information propagation,
and freedom to do AI research is important for us to converge
the fastest on the space of technologies, ideas,
and whatnot that lead to this growth.
And so ultimately, there have been quite a few forks. Some are just memes, but some are more serious. Vitalik Buterin recently made a d/acc fork; he has his own sort of fine-tunings of e/acc.
Does anything jump out from memory about the unique characteristics of that fork from Vitalik?
I would say that it's trying to find a middle ground between e/acc and sort of EA and AI safety.
To me, having a movement that is opposite to the mainstream narrative that was taking over Silicon Valley was important to shift the dynamic range of opinions. And just like the balance between centralization and decentralization, the real optimum is always somewhere in the middle.
But for e/acc, we're pushing for entropy, novelty, disruption, malleability, speed, rather than being conservative, suppressing thought, suppressing speech, adding constraints, adding too many regulations, and slowing things down. And so we're trying to bring balance to the force, right?
Balance to the force of human civilization.
Yeah.
It's literally the forces of constraint versus the entropic force that makes us explore. Systems are optimal when they're at the edge of criticality, between order and chaos, between constraint and energy minimization on one hand and entropy on the other. Systems want to equilibrate, to balance these two things. And I thought that the balance was lacking, so we created this movement to bring balance.
Well, I like the sort of visual of the landscape of ideas evolving through forks. Kind of thinking of another part of history: Marxism as the original repository, then Soviet communism as a fork of that, and then Maoism as a fork of Marxism-communism. Those are all forks, exploring different ideas.
Thinking of culture almost like code, right? Nowadays, what you prompt into an LLM, or what you put in the constitution of an LLM, is basically its cultural framework, what it believes. You can share it on GitHub nowadays.
So, trying to take inspiration from what has worked in this machinery of software that adapts over the space of code, could we apply that to culture? And our goal is not to say, you should live your life this way, XYZ; it is to set up a process where people are always searching over subcultures and competing for mind share.
And I think creating this malleability of culture is super important for us to converge onto the cultures and the heuristics about how to live one's life that are updated to modern times. Because there's really been a sort of vacuum of spirituality and culture. People don't feel like they belong to any one group. And there have been parasitic ideologies that have taken the opportunity to populate this petri dish of minds. Elon calls it the mind virus. We call it the decel mind virus complex, the decelerative tendency that is kind of the overall pattern among all of them. There are many variants as well.
And so, against this sort of viral pessimism, this decelerative movement, we needed to have not only one movement but many variants, so it's very hard to pinpoint and stop. But the overarching thing is nevertheless a kind of memetic optimism pandemic.
So, I mean, okay, let me ask you: do you think e/acc is, to some degree, a cult?
Define cult.
I think a lot of human progress is made when you have independent thought, when you have individuals that are able to think freely. And very powerful memetic systems can kind of lead to groupthink. There's something in human nature that leads to mass hypnosis, mass hysteria, where we start to think alike whenever there's a sexy idea that captures our minds, and it's actually hard to break us apart, pull us apart, diversify our thought. So to that degree, to what degree is everybody kind of chanting e/acc, e/acc, like the sheep in Animal Farm?
Well, first of all, it's fun, it's rebellious, right? I think we lean into this concept of meta-irony, of sort of being on the boundary of, you're not sure if we're serious or not. It's much more playful, much more fun. For example, we talk about thermodynamics being our God. And sometimes we do cult-like things, but there's no, like, ceremony and robes and whatnot.
So, yeah. But ultimately, yeah, I mean, I totally agree that it seems to me that humans want
to feel like they're part of a group. So they naturally try to agree with their neighbors and find common ground.
And that leads to sort of mode collapse in the space of ideas.
We used to have sort of one cultural island that was allowed, a particular subspace of thought, and anything diverging from that subspace of thought was suppressed or even canceled. Now we've created a new mode, but the whole point is
that we're not trying to have a very restricted space of thought. There's not just one way to think about e/acc and its many forks; the point is that there are many forks, and there can be many clusters and many islands, and I shouldn't be in control of it in any way.
I mean, there's no formal org whatsoever.
I just put out tweets and certain blog posts.
And people are free to defect and fork if there's an aspect
they don't like.
And so that makes it so that there should be a sort of de-territorialization in the space of ideas, so that we don't end up in one cluster that's very cult-like. Cults usually don't allow people to defect or start competing forks, whereas we encourage it, right?
What do you think are the pros and cons of humor and memes?
In some sense, there's a wisdom to memes. What is the magic theater? What book is that from? Hermann Hesse, Steppenwolf, I think. There's a kind of embracing of the absurdity that seems to get to the truth of things, but at the same time, it can also decrease the quality and the rigor of the discourse. Do you feel the tension of that?
Yeah. So initially, I think what allowed us to grow under the radar was that we were camouflaged in this sort of meta-irony. We would sneak deep truths into a package of humor and memes, what are called shitposts. And I think that was purposeful camouflage against those that seek status, because it's very hard to argue with a cartoon frog or a cartoon of an intergalactic Jeff Bezos and still take yourself seriously.
And so that allowed us to grow pretty rapidly in the early days. But of course, essentially, people get steered: their notion of the truth comes from the data they see, from the information they're fed, and the information people are fed is determined by algorithms.
Right.
And really, what we've been doing is engineering what we call high-memetic-fitness packets of information, so that they can spread effectively and carry a message. Right. So it's kind of a vector to spread the message. And yes, we've been using techniques that are optimal for today's algorithmically amplified information landscapes.
But I think we're reaching the point of scale
where we can have serious debates and serious conversations.
And that's why we're considering doing a bunch of debates and having more serious long-form discussions.
Because I don't think that the timeline is optimal for very serious thoughtful discussions.
You get rewarded for polarization. And so even though we started a movement that is literally
trying to polarize the tech ecosystem at the end of the day,
it's so that we can have a conversation
and find an optimum together.
I mean, that's kind of what I try to do with this podcast, given the landscape of things, to still have long-form conversations. But there is a degree to which absurdity is fully embraced. In fact, this very conversation is multi-level absurd. So first of all, I should say that I just very recently had a conversation with Jeff Bezos, and I would love to hear your, Beff Jezos's, opinion of Jeff Bezos, speaking of intergalactic Jeff Bezos. What do you think of that particular individual by whom your name is inspired?
Yeah, I mean, I think Jeff is really great. He's built one of the most epic companies of all time.
He's leveraged the techno-capital machine and techno-capital acceleration to give us what we wanted: quick delivery, very convenient, at home, low prices. He understood how the machine worked and how to harness it: running the company, not trying to take profits too early, putting it back in, letting the system compound and keep improving. And arguably, I think Amazon has invested some of the largest amounts of capital in robotics out there. And certainly, the birth of AWS enabled the sort of tech boom we've seen today, which has paid the salaries of, I guess, myself and all of our friends to some extent.
And so I think we can all be grateful to Jeff.
And he's one of the great entrepreneurs out there, one
of the best of all time, unarguably.
Of course, the work at Blue Origin, similar to the work at SpaceX, is trying to make humans a multi-planetary species, which seems almost like a bigger thing than the capital machine. Or is it the capital machine at a different time scale, perhaps?
Yeah. I think that companies tend to optimize quarter over quarter, maybe a few years out, but individuals that want to leave a legacy can think on a multi-decade or multi-century time scale.
And so the fact that some individuals are such good capital allocators unlocks the ability to allocate capital to goals that take us much further out, that are much further-looking. Elon's doing this with SpaceX, putting all this capital towards getting us to Mars. Jeff is trying to build Blue Origin, and I think he wants to build O'Neill cylinders and get industry off-planet, which I think is brilliant.
I think, just overall, I'm for billionaires. I know this is a controversial statement sometimes, but I think that, in a sense, it's kind of proof-of-stake voting, right?
Like if you've allocated capital efficiently,
you unlock more capital to allocate
just because clearly, you know how to allocate capital
more efficiently, which is in contrast
to politicians that get elected
because they speak the best on TV, right?
Not because they have a proven track record of allocating taxpayer capital most efficiently.
And so that's why I'm for capitalism over, say, giving all our money to the government
and letting them figure out how to allocate it.
So yeah.
It's a viral and popular meme to criticize billionaires, as you mentioned. Why do you think there's quite widespread criticism of people with wealth, especially those in the public eye, like Jeff and Elon and Mark Zuckerberg and, who else, Bill Gates?
Yeah.
I think a lot of people, instead of trying to understand how the techno-capital machine works and realizing they have much more agency than they think, would rather have this sort of victim mindset: I'm just subjected to this machine, it is oppressing me, and the successful players clearly must be evil because they've been successful at this game that I'm not successful at.
But I've managed to take some people that were in that mindset and make them realize how the techno-capital machine works, and how you can harness it for your own good and for the good of others.
By creating value, you capture some of the value you create for the world.
That sort of positive mindset shift is so potent.
And really, that's what we're trying to do by scaling e/acc: unlocking that higher level of agency. Actually, you're far more in control of the future than you think. You have agency to change the world; go out and do it. Here's permission. Each individual has agency.
The motto "keep building" is often heard. What does that mean to you? And what does it have to do with Diet Coke?
Well, by the way, thank you so much for the Red Bull. It's working pretty well. Feeling pretty good.
Awesome.
Well, so building technologies, and building, it doesn't have to be technologies.
Just building in general means having agency trying to change the world by creating, let's
say, a company which is a self-sustaining organism that accomplishes a function in the broader
techno-capital machine.
To us, that's the way to achieve the change in the world that you'd like to see, rather than, say, pressuring politicians or creating nonprofits. Nonprofits, once they run out of money, can no longer accomplish their function. And you're kind of deforming the market artificially, compared to dancing with the market to convince it that this function is important, has value, and here it is. And so I think this is sort of the difference between the degrowth, ESG approach and, say, Elon's: the degrowth approach is, we're going to manage our way out of a climate crisis, and Elon's is, I'm going to build a company that is self-sustaining, profitable, and growing, and we're going to innovate our way out of this dilemma. And we're trying to get people to do the latter rather than the former, at all scales.
Elon is an interesting case.
So you are a proponent, you celebrate Elon, but he's also somebody
who has for a long time warned about the dangers, the potential dangers, existential risks
of artificial intelligence.
How do you square the two?
Is that a contradiction to you?
It is somewhat because he's very much against regulation in many aspects, but for AI, he's
definitely a proponent of regulations.
I think overall, he saw the dangers of, say, OpenAI cornering the market and then getting to have a monopoly over the cultural priors that you can embed in these LLMs. As LLMs become the source of truth for people, you can shape the culture of the people, and so you can control people by controlling LLMs. And he saw that, just like it was the case for social media: if you shape the function of information propagation, you can shape people's opinions. So he sought to make a competitor.
So at least there, I think we're very aligned: the way to a good future is to maintain a sort of adversarial equilibrium between the various AI players. I'd love to talk to him to understand his thinking about how to advance AI going forward. I mean, he's also hedging his bets, I would say, with Neuralink: if he can't stop the progress of AI, he's building the technology to merge. So look at the actions, not just the words.
Well, I mean, there's some degree to which being concerned, maybe using human psychology, being concerned about threats all around us, is a motivator.
Like, it's an encouraging thing. I operate much better when there's a deadline, the fear of the deadline. And I, for myself, create artificial things, like I want to create in myself this kind of anxiety, as if something really horrible will happen if I miss the deadline. I think there's some degree of that here, because creating AI that's aligned with humans has a lot of potential benefits. And so a different way to reframe that is: if you don't, you're all going to die. It just seems to be a very powerful psychological formulation of the goal of creating human-aligned AI.
I think that anxiety is good. Like I said, I want the free market to create aligned AIs that are reliable, and I think that's what he's trying to do with xAI, so I'm all for it. What I am against is stopping, let's say, the open-source ecosystem from thriving by, for example, claiming in the executive order that open-source LLMs are dual-use technologies and should be government-controlled, so that everybody needs to register their GPUs and their big matrices with the government. I think that extra friction will dissuade a lot of hackers from contributing, hackers that could later become the researchers that make key discoveries that push us forward, including discoveries for AI safety.
And so I think I just want to maintain ubiquity of opportunity to contribute to AI and
to own a piece of the future.
It can't just be legislated behind some wall where only a few players get to play the
game.
I mean, the e/acc movement is often caricatured to mean progress and innovation at all costs. It doesn't matter how unsafe it is, it doesn't matter if it causes a lot of damage; you just build cool shit as fast as possible, stay up all night with the Diet Coke, whatever it takes. I guess, I don't know if there's a question in there, but: how important, to you and in the different formulations of e/acc you've seen, is safety, is AI safety?
I think, again, if there was no one working on it, I would be a proponent of it. Our goal is to bring balance. And obviously a sense of urgency is a useful tool to make progress. It hacks our dopaminergic systems and gives us energy to work late into the night. It also helps to have a higher purpose you're contributing to. At the end of the day, it's like, what am I contributing to? I'm contributing to the growth of this beautiful machine so that we can seek the stars. That's really inspiring. That's also a sort of neuro-hack.
So you're saying that safety is important to you, but right now, in the landscape of ideas you see, AI safety as a topic is used more often to gain centralized control. So in that sense, you're resisting it as a proxy for gaining centralized control.
Yeah, I just think we have to be careful, because safety is just the perfect cover for centralization of power, and for eventually covering up corruption. I'm not saying it's corrupted now, but it could be down the line. And really, if you let the argument run, there's no amount of centralization of control that will be enough to ensure your safety. There are always more nines of p(safety) to gain: 99.99999 percent, but maybe you want another nine. Please give us full access to everything you do, full surveillance. And frankly, some proponents of AI safety have proposed having a global panopticon, where you have centralized perception of everything going on. To me, that just opens the door wide open to a sort of Big Brother, 1984-like scenario, and that's not a future I want to live in. Because we know we have some examples throughout history when that did not lead to a good outcome.
Right. You mentioned you founded a company, Extropic, that recently announced a $14.1 million seed round. What's the goal of the company? You're talking about a lot of interesting physics things, so what are you up to over there, that you can talk about?
Yeah, I mean, originally we weren't going to announce last week, but with the doxxing and disclosure, we got our hand forced, so we had to disclose roughly what we were doing. But really, the company was born from my dissatisfaction, and that of my colleagues, with the quantum computing roadmap. Quantum computing was sort of the first path to physics-based computing that was trying to commercially scale.
I was working on physics-based AI that runs on these physics-based computers, but ultimately our greatest enemy was noise, this pervasive problem of noise. As I mentioned, you have to constantly pump the noise out of the system to maintain this pristine environment where quantum mechanics can take effect, and that constraint was just too much; it's too costly to do that. And so we were wondering: as generative AI is sort of eating the world, and more and more of the world's computational workloads are focused on generative AI, how could we use physics to engineer the ultimate physical substrate for generative AI, from first principles of physics, of information theory, of computation, and ultimately of thermodynamics?
Right?
And so what we're seeking to build is a physics-based computing system, and physics-based AI algorithms that are inspired by out-of-equilibrium thermodynamics, or harness it directly, to do machine learning as a physical process.
So what does that mean, machine learning as a physical process? Is that hardware? Is it software? Is it both? Is it trying to do the full stack in some kind of unique way?
Yes, it is full stack.
And so we're folks that have built
differentiable programming
into the quantum computing ecosystem with TensorFlow Quantum.
One of the co-creators of TensorFlow Quantum, Trevor McCourt, is our CTO.
We have some of the best quantum computer architects,
those that have designed IBM's and AWS's systems.
They've left quantum computing to help us build
what we call actually a thermodynamic computer.
A thermodynamic computer.
Well, actually, before we get to that, let's linger on TensorFlow Quantum.
What lessons have you learned from TensorFlow
Quantum? Maybe you can speak to what it takes to create, essentially, a software API to a quantum computer.
Right. I mean, that was a challenge to invent, to build, and then to get to run
on the real devices.
Can you actually speak to what it is?
Yeah, so TensorFlow Quantum was an attempt at, well, I mean, I guess we succeeded at combining deep learning, or differentiable classical programming, with quantum computing, and turning quantum computing into something that has types of programs that are differentiable. And, you know, Andrej Karpathy calls differentiable programming software 2.0, right? It's like, gradient descent is a better programmer than you. And the idea was that, in the early days of quantum computing, you could only run short quantum programs. And so, which quantum programs should you run? Well, just let gradient descent find those programs instead.
And so we built sort of the first infrastructure
to not only run differentiable quantum programs,
but combine them as part of broader deep learning graphs, incorporating deep neural networks, you know, the ones you know and love, with what are called quantum neural networks. And ultimately, it was a very cross-disciplinary effort. We had to invent all sorts of ways to differentiate, to backpropagate through the graph, the hybrid graph. But ultimately, it taught me that the way to program matter and to program physics is by differentiating through control parameters. If you have parameters that affect the physics of the system, you can evaluate some loss function, you can optimize the system to accomplish a task, whatever that task may be. And that's a very universal meta-framework for how to program physics-based computers.
So try to parameterize everything, make those parameters differentiable, and optimize. Yes.
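(That recipe, "let gradient descent find the quantum program," looks roughly like the following minimal sketch using the public TensorFlow Quantum API; the one-qubit circuit and target value are illustrative, not code from his actual projects:)

```python
# Minimal differentiable quantum program: gradient descent chooses the
# rotation angle so the qubit's measured <Z> matches a target value.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')                    # trainable control parameter
circuit = cirq.Circuit(cirq.rx(theta)(qubit))    # short parameterized program

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),  # serialized input circuits
    tfq.layers.PQC(circuit, cirq.Z(qubit)),            # differentiable expectation <Z>
])

inputs = tfq.convert_to_tensor([cirq.Circuit()])  # empty circuit: start in |0>
target = tf.constant([[-1.0]])                    # ask for <Z> = -1, i.e. flip to |1>

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), loss='mse')
model.fit(inputs, target, epochs=100, verbose=0)  # gradient descent "writes" the program
```

(The same structure scales up to the hybrid graphs he mentions: the PQC layer can sit alongside ordinary Keras layers, and backpropagation flows through both.)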
Okay.
So are there some more practical engineering lessons from TensorFlow Quantum?
Just organizationally too, like the humans involved and how to get to a product, how to create
good documentation, how to have, I don't know.
All of these little subtle things
that people might not think about.
I think working across disciplinary boundaries
is always a challenge.
And you have to be extremely patient in teaching one
another.
I learned a lot of software engineering
through the process.
My colleagues learned a lot of quantum physics
and some learned machine learning
through the process of building this system.
And I think if you get some smart people
that are passionate and trust each other in a room
and you have a small team and you teach each other,
your specialties, suddenly you're kind of forming
this sort of model soup of
expertise and something special comes out of that.
It's like combining genes, but for your knowledge bases and sometimes special products come
out of that.
I think even though it's very high friction initially to work in an interdisciplinary
team, I think the product at the end of the day is worth it.
And so, I learned a lot trying to bridge the gap there,
and I mean, it's still a challenge to this day.
You know, we hire folks that have an AI background,
folks that have a pure physics background,
and somehow we have to make them talk to one another, right?
Is there a magic, is there some science
and art to the
hiring process to building a team that can create magic together?
Yeah, it's really hard to pinpoint that je ne sais quoi, right? I didn't know you speak French. That's very nice. Yeah, I'm actually French Canadian. So you are legitimately French. I thought you were just doing that for the cred. No, no, I'm truly French Canadian, from Montreal.
But yeah, essentially we look for people with very high fluid intelligence that aren't over-specialized, because they're going to have to get out of their comfort zone. They're going to have to incorporate concepts that they've never seen before and very quickly get comfortable with them, right, or learn to work in a team. And so that's sort of what we look for when we hire. We can't hire people that have just been optimizing one subsystem for the past three or four years; we need a really general, broader sort of intelligence, not just a specialty.
And people that are open-minded, really, because if you're pioneering a new approach from
scratch, there is no textbook, there's no reference.
It's just us.
And people that are hungry to learn.
So we have to teach each other, we have to learn the literature,
we have to share knowledge bases, collaborate in order to push the boundary of knowledge further
together. Right? And so people that are used to just getting prescribed what to do at this stage
when you're at the pioneering stage, that's not necessarily who you want to hire.
So you mentioned, with Extropic, you're trying to build the physical substrate for generative AI. What's the difference between that and the AGI, the AI itself?
So is it possible that in the halls
of your company, AGI will be created,
or will AGI just be using this as a substrate?
I think our goal is to run both physics-based AI and human-like AI, or anthropomorphic AI. Sorry for the use of the term AGI. I know it's triggering for you.
We think that the future is actually physics-based AI combined with anthropomorphic AI. So you can imagine I have a sort of world-modeling engine through physics-based AI. Physics-based AI is better at representing the world at all scales, because it can have quantum-mechanical, thermodynamic, deterministic, hybrid representations of the world, just like our world at different scales has different regimes of physics. If you inspire yourself from that in the way you learn representations of nature, you can have much more accurate representations of nature. So you can have very accurate world models at all scales. Right?
So you have the world-modeling engine, and then you have the sort of anthropomorphic AI that is human-like. So you can have the scientist and the playground to test your ideas; you can have a synthetic scientist. And to us, that joint system of a physics-based AI and an anthropomorphic AI is the closest thing to a fully general artificial intelligence system.
So you can get closer to truth by grounding the AI in physics, but you can also still have an anthropomorphic interface for us humans, who like to talk to other humans or human-like systems. So on that topic, I suppose one of the big limitations of current large language models, to you, is that they're good bullshitters. They're not really grounded to truth, necessarily. Would that be fair to say? Yeah. You know, you wouldn't try to extrapolate the stock market with an LLM trained on text from the internet, right? It's not going to be a very accurate model. It's not going to model its priors or its uncertainties about the world very accurately, right? So you need a different type of AI to complement this sort of text-extrapolation AI.
Yeah.
You mentioned the singularity earlier. How far away are we from a singularity? I don't know if I believe in a finite-time singularity as a single point in time. I think it's going to be asymptotic, sort of a diagonal asymptote. Like, you know, we have the light cone, we have the limits of physics restricting our ability to grow. So obviously, you can't fully diverge in finite time.
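(The distinction he's drawing can be made precise with a toy pair of growth laws; this framing is an editorial illustration, not his equations. Superlinear growth can blow up at a finite time:

$$\dot{x} = x^2 \;\Longrightarrow\; x(t) = \frac{x_0}{1 - x_0 t}, \quad \text{diverging at the finite time } t_s = 1/x_0,$$

whereas physically bounded growth saturates:

$$\dot{x} = r\,x\!\left(1 - \frac{x}{K}\right) \;\Longrightarrow\; x(t) \to K, \quad \text{a cap set by physical limits.}$$

A "finite-time singularity" is the first kind of curve; an asymptote bounded by the light cone and the limits of physics is the second.)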
My priors are... I think a lot of people on the other side of the aisle think that once we reach human-level AI, there's going to be an inflection point and a sudden FOOM, like suddenly AI is going to grok how to manipulate matter at the nanoscale and assemble nanobots. And having worked for nearly a decade on applying AI to engineer matter, it's much harder than they think. In reality, you need a lot of samples from either a simulation of nature, which is very accurate and costly, or nature itself. And that keeps your ability to control the world around us in check.
There's a sort of minimal cost, computationally and thermodynamically, to acquiring information about the world in order to be able to predict and control it. And that keeps things in check.
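(He doesn't name it, but the canonical floor on the thermodynamic cost of information processing is Landauer's principle: erasing one bit in an environment at temperature $T$ dissipates at least

$$E_{\min} = k_B T \ln 2 \approx 1.38 \times 10^{-23}\,\tfrac{\text{J}}{\text{K}} \times 300\,\text{K} \times 0.693 \approx 2.9 \times 10^{-21}\,\text{J}$$

at room temperature. Any agent that must acquire and process bits about the world in order to predict and control it pays an irreducible physical cost of this kind, which is the "keeps things in check" point.)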
It's funny you mentioned the other side of the aisle.
So in the poll I posted about p(doom) yesterday, what's the probability of doom, there seems to be a nice division between people who think it's very likely and people who think it's very unlikely. I wonder if in the future that will be the actual Republicans-versus-Democrats division, blue versus red: the doomers versus the EACs.
Yeah.
So this movement, you know, is not right-wing or left-wing, fundamentally. It's more like up versus down, and it's clearly up. Okay. Civilization, right? All right.
But it seems like there is a sort of alignment with the existing political parties, where those that are for more centralization of power and control and more regulations are aligning themselves with the doomers, because instilling fear in people is a great way to get them to give up more control and give the government more power. But fundamentally, we're not left versus right. We've done polls of people's alignment with EAC, and I think it's pretty balanced. So it's a new fundamental issue of our time. It's not just centralization versus decentralization. It's like techno-progressivism versus techno-conservatism, right?
So, EAC as a movement is often formulated in contrast to EA, effective altruism. What do you think are the pros and cons of effective altruism? What's interesting and insightful to you about them, and what is negative?
Right. I think, like, people trying to do good from first principles is good. We should actually say, and sorry to interrupt, and you can correct me if I'm wrong, but effective altruism is a kind of movement that's trying to do good optimally, where good is probably measured as something like the amount of suffering in the world: you want to minimize it. And there are ways that can go wrong, as any optimization can. And so it's interesting to explore how things can go wrong. We're both trying to do good to some extent, and we're arguing for which loss function we should use, right? Yes.
Their loss function is sort of hedons, right? Units of hedonism: how good do you feel, and for how much time. And so suffering would be negative hedons, and they're trying to minimize that. But to us, it seems like that loss function has some spurious minima, right? You can start minimizing shrimp-farm pain, right? Which seems not that productive to me.
Or you can end up with wireheading, where you just either install a Neuralink or you scroll TikTok forever, and you feel good on a short-term timescale because of your neurochemistry, but on a long-term timescale it causes decay and death, right? Because you're not being productive. Whereas EAC measures the progress of civilization not in terms of a subjective loss function like hedonism, but rather an objective measure, a quantity that cannot be gamed: physical energy. It's very objective, right? And there are not many ways to game it. If you measured it in terms of, like, GDP, or a currency, that's pinned to a certain value that's moving, right? And so that's not a good way to measure our progress. But the thing is, we're both trying to make progress and ensure humanity flourishes and gets to grow. We just have different loss functions and different ways of going about doing it.
Is there a degree, maybe you can educate me, correct me... I get a little bit skeptical when there's an equation involved, trying to reduce all of human civilization, the human experience, to an equation. Is there a degree to which we should be skeptical of the tyranny of an equation, of a loss function to optimize over? Like, having a kind of intellectual humility about optimizing over loss functions?
Yeah, so this particular loss function, it's not stiff; it's kind of an average of averages, right? It's like, distributions of states in the future are going to follow a certain distribution. So it's not deterministic. We're not on stiff rails, right? It's just a statistical statement about the future. But at the end of the day, you can believe in gravity or not, but it's not really optional to obey it, right? And some people try to test that, and it goes not so well.
So, similarly, I think thermodynamics is there whether we like it or not, and we're just trying to point out what is, and trying to orient ourselves and chart a path forward given this fundamental truth.
But there's still some uncertainty, still a lack of information, and humans tend to fill that gap with narratives. And so how they interpret... even physics is up to interpretation when there's uncertainty involved.
And humans tend to use that to further their own means.
So whenever there's an equation, it just seems like, until
we have really perfect understanding of the universe, humans will do what humans do.
And they'll try to use the narrative of doing good to fool the populace into doing bad. I guess this is something we should be skeptical about in all movements. That's right. So we invite skepticism. Do you have an understanding, to a degree, of what went wrong? What do you think may have gone wrong with effective altruism that might also go wrong with effective accelerationism?
Yeah, I mean, I think it provided, initially, a sense of community for engineers and intellectuals and rationalists in the early days, and it seems like the community was very healthy. But then they formed all sorts of organizations and started routing capital and having actual power. They have real power. They influence governments. They influence most AI orgs now. They were literally controlling the board of OpenAI, and look over at Anthropic; I think they have some control over that too.
And so I think, you know, the assumption of EAC, much like that of capitalism, is that every agent, organism, and meta-organism is going to act in its own interest, and we should maintain a sort of adversarial equilibrium, or adversarial competition, to keep each other in check at all times, at all scales. I think that, yeah, ultimately, it was the perfect cover to acquire tons of power and capital, and unfortunately, sometimes that corrupts people over time.
Since building is important, what does a perfectly productive day in the life of Guillaume Verdon look like? How much caffeine do you consume? What does the perfect day look like?
Okay.
So I have a particular regimen. I would say my favorite days are 12 PM to 4 AM. I'll have meetings in the early afternoon, usually external meetings, some internal meetings, because as CEO I have to interface with the outside world, whether it's customers or investors, or interviewing potential candidates. And usually I'll have ketones, exogenous ketones. So you're on a keto diet, or just... I've done keto before, for football and whatnot. But I like to have a meal after part of my day is done, and so I can just have extreme focus.
You do the social interactions literally in the day, without food. Front-load them, yeah. Like right now, I'm on ketones and Red Bull. Yeah. And it just gives you a clarity of thought that is really next-level. Because when you eat, you're actually allocating some of the energy that could be going to neural energy to your digestion. After I eat, maybe I take a break, an hour or so, an hour and a half. And then usually it's, ideally, one meal a day, like steak and eggs and vegetables, animal-based primarily, so fruit and meat. And then I do a second wind, usually; that's deep work.
Right?
Because I am a CEO, but I'm still technical. I'm contributing to most patents. And there I'll just stay up late into the night and work with engineers on very technical problems. So that's, like, the 9 PM to 4 AM, whatever, that range of time? Yeah, that's the perfect time. The emails, the things that are on fire, stop trickling in. You can focus, and then you have your second wind. And I think Demis Hassabis has a similar workday, to some extent. So that's definitely inspired my workday.
But yeah, I started this workday when I was at Google, when I had to manage a bit of the product during the day and have meetings, and then do technical work at night. Exercise, sleep, those kinds of things? Yeah. You said football. You used to play football? Yeah, I used to play American football. I've done all sorts of sports growing up, and then
I was into powerlifting for a while. So when I was studying mathematics in grad school, I would just, you know, do math and lift, take caffeine, and that was my day. It was very pure, the purest of monk modes. But it's really interesting how, in powerlifting, you're trying to cause neural adaptation by having certain driving signals, and you're trying to engineer neuroplasticity through all sorts of supplements. And, you know, you have all sorts of brain-derived neurotrophic factors that get secreted when you lift. So it's funny to me how I was trying to engineer neural adaptation in my nervous system more broadly, not just my brain, while learning mathematics.
I think you can learn much faster if you really care, if you convince yourself to care a lot
about what you're learning.
And you have some sort of assistance, let's say caffeine or some cholinergic supplement, to increase neuroplasticity. I should chat with Andrew Huberman at some point; he's the expert. But yeah, at least to me, it's like, you know, you can try to input more tokens into your brain, if you will, and you can try to increase the learning rate so that you can learn much faster on a shorter timescale.
So I've learned a lot of things.
I followed my curiosity.
Naturally, if you're passionate about what you're doing, you're going to learn faster,
you're going to become smarter faster.
And if you follow your curiosity, you're always going to be interested.
And so I advise people to follow their curiosity and don't respect the boundaries of certain
fields or what you've been allocated in terms of lane of what you're working on.
Just go out and explore and follow your nose and try to acquire and compress as much information
as you can into your brain, anything that you find interesting.
And caring about a thing, like you said, which is interesting... what works for me really well is tricking yourself into caring about a thing.
Yes.
And then you start to really care about it.
Yep.
So it's funny, motivation is a really good catalyst for learning. Right. And so at least part of my character as Beff Jezos is kind of, like, yeah, the hype man. Yeah, it's just hype, but I'm, like, hyping myself up, and then I just tweet about it.
Yeah.
And it's just, when I'm trying to get really hyped up, in an altered state of consciousness where I'm ultra-focused, in the flow, wired, trying to invent something that's never existed, I need to get to, like, unreal levels of excitement. But your brain has these levels of cognition that you can unlock with, like, higher levels of adrenaline and whatnot. And I mean, I've learned in powerlifting that you can actually engineer a mental switch to increase your strength, right? Like, if you can engineer a switch, maybe you have a prompt, like a certain song or some music, where suddenly you're fully primed, then you're at maximum strength, right? And I've engineered that switch through years of lifting. If you're going to get under 500 pounds and it could crush you, if you don't have that switch to be wired in, you might die.
So that'll wake you right up.
And that sort of skill I've carried over to, like, research. When it's go time, when the stakes are high, somehow I just reach another level of neural performance. So Beff Jezos is your sort of embodiment, a representation of your intellectual Hulk. It's productivity Hulk. That you just turn on. What have you learned about the nature
of identity from having these two identities? I think it's interesting for people to be able to
put on those two hats so explicitly. I think it was interesting in the early days. In the early days, I thought it was truly compartmentalized. Like, oh yeah, this is a character. You know, I'm Guillaume; Beff is just the character out there. I, like, take my thoughts and then I extrapolate them to a bit more extreme. But, you know, over time, it's kind of like both identities were starting to merge mentally, and people were like, no, I met you. You are Beff. You are not just Guillaume. And I was like, wait, am I? And now it's, like, fully merged. But even before the dox, it was already starting mentally: you know, I am this character. It's part of me.
Would you recommend people sort of have an alt?
Absolutely.
Like, young people, would you recommend they explore different identities by having alt accounts? It's fun. It's like writing an essay and taking a position, like you do in debate. You can have experimental thoughts. With the stakes being so low, because you're an anon with, I don't know, 20 followers or something, you can experiment with your thoughts in a low-stakes environment. I feel like we've lost that in the era of everything being under your main name, everything being attributable to you. People are just afraid to speak, to explore ideas that aren't fully formed, right? And I feel like we've lost something there. So I hope platforms like X and others really help support people trying to stay pseudonymous or anonymous, because it's really important for people to share thoughts that aren't fully formed and converge onto maybe hidden truths that would be hard to converge upon through open conversation with real names.
Yeah, I really believe in, like, not radical, but rigorous empathy. It's like really considering what it's like to be a person of a certain viewpoint, and taking that as a thought experiment farther and farther and farther. And one way of doing that is an alt account. That's a fun, interesting way to really explore what it's like to be a person that believes a set of beliefs. And taking that across a span of several days, weeks, months... of course, there's always the danger of becoming that. That's the Nietzsche: gaze long into the abyss, and the abyss gazes into you. You have to be careful. Freaking Beff. Yeah, right. Yeah, you wake up with a shaved head one day, just like, who am I?
What have I become? So you mentioned quite a bit of advice already, but what advice would you give to young people about how, in this interesting world we're in, to have a career and a life they can be proud of?
I think, to me, the reason I went into theoretical physics was that I wanted to learn the base of the stack, the part that was going to stick around no matter how the technology changes, right? And to me, that was the foundation upon which I later built engineering skills and other skills.
And to me, the laws of physics... you know, it may seem like the landscape right now is changing so fast that it's disorienting, but certain things, like fundamental mathematics and physics, aren't going to change. And if you have that knowledge, and knowledge about complex systems and adaptive systems, I think that's going to carry you very far. And so not everybody has to study mathematics, but I think it's really a huge cognitive unlock to learn math and some physics and engineering. Get as close to the base of the stack as possible.
Yeah, that's right. Because the base of the stack doesn't change. Everything else, you know, your knowledge might become not as relevant in a few years. Of course, there's a sort of transfer learning you can do, but then you have to transfer-learn constantly. I guess the closer you are to the base of the stack, the easier the transfer learning, the shorter the jump.
Right. Right. And you'd be surprised, like, once you've learned concepts in many physical scenarios, how they can carry over to understanding other systems that aren't necessarily physics. And I guess, like, the EAC writings, you know, the principles and tenets post, that was based on physics; it was kind of my experimentation with applying some of the thinking from out-of-equilibrium thermodynamics to understanding the world around us. And it's led to EAC, this movement.
If you look at yourself as one cog in the machine, the capitalist machine, one human... do you think mortality is a feature or a bug? Like, would you want to be immortal?
No. I think, fundamentally, in thermodynamic dissipative adaptation, there's the word dissipation. Dissipation is important. Death is important, right? We have a saying in physics: physics progresses one funeral at a time. Yeah. I think the same is true for capitalism: companies, empires, people. Everything must die at some point.
I think we should probably extend our lifespan, because we need a longer period of training. The world is more and more complex, right? We have more and more data to really be able to predict and understand the world. And if we have a finite window of higher neuroplasticity, then we have sort of a hard cap on how much we can understand about our world.
So, I think I am for death, because, again, I think it's important. If you had a king that would never die, that would be a problem. The system wouldn't be constantly adapting. You need novelty, you need youth, you need disruption to make sure the system is always adapting and malleable. Otherwise, if things are immortal, if you have, let's say, corporations that are there forever and never have to die, they get calcified, they become not as optimal, not as high-fitness in a changing, time-varying landscape. And so death gives space for youth and novelty to take their place. And I think it's an important part of every system in nature.
So, yeah, I am for death, but I do think that a longer lifespan, a longer time for neuroplasticity, bigger brains, should be something we strive for. Well, in that, Jeff Bezos and Beff Jezos agree that all companies die. For Jeff, the goal is to try... he calls it day-one thinking... to constantly, for as long as possible, reinvent, to sort of extend the life of the company. But eventually it, too, will die, because it's so damn difficult to keep reinventing. Are you afraid of your own death?
I think I have ideas and things I'd like to achieve in this world before I have to go, but I don't think
I'm necessarily afraid of death.
You're not attached to this particular body and mind that you got? No. I'm sure there are going to be better versions of myself in the future, or forks, right? Genetic forks or otherwise. I truly believe that. I think there's a sort of evolutionary-like algorithm happening in every bit of the world; everything is sort of adapting through this process that we describe in EAC. And I think maintaining this adaptation and malleability is how we have constant optimization of the whole machine. And so I don't think I'm particularly, you know, an optimum that needs to stick around forever. I think there are going to be greater optima, in many ways.
What do you think is the meaning of it all? What's the why of the machine, the EAC machine? The why? Well, the why is thermodynamics. It's why we're here. It's what has led to the formation of life and of civilization, the evolution of technologies, and the growth of civilization. But why do we have thermodynamics? Why do we have our particular universe? Why do we have these particular hyperparameters, the constants of nature?
Well, then you get into the anthropic principle: in the landscape of potential universes, right, we're in the universe that allows for life. And then, why are there potentially many universes? I don't know; I don't know that part. But could we potentially engineer new universes or create pocket universes and set the hyperparameters, so there is some mutual information between our existence and that universe, and we'd be somewhat its parents? I don't know, that'd be very poetic.
It's purely conjecture, but, again, this is why figuring out quantum gravity would allow us to understand if we can do that. And above that, why does it all seem so beautiful and exciting? The quest to figure out quantum gravity seems so exciting. Why is that? Why are we drawn to that? Why are we pulled towards that? Just that puzzle-solving, creative force that underpins all of it, it seems like.
I think, just like an LLM seeks to minimize cross-entropy between its internal model and the world, we seek to minimize the statistical divergence between our predictions of the world and the world itself.
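(The LLM analogy is exact: with $p$ the world's distribution and $q_\theta$ the model's, the cross-entropy loss decomposes as

$$H(p, q_\theta) = -\,\mathbb{E}_{x \sim p}\!\left[\log q_\theta(x)\right] = H(p) + D_{\mathrm{KL}}(p \,\|\, q_\theta),$$

and since the world's own entropy $H(p)$ is fixed, minimizing cross-entropy is exactly minimizing the statistical divergence $D_{\mathrm{KL}}$ between predictions and the world.)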
And having regimes of energy scales or physical scales at which we have no visibility to predict or perceive, that's an insult to us. And we want to be able to understand the world better, in order to best steer it, or steer us through it. And in general, it's a capability that has evolved, because the better you can predict the world, the better you can capture utility, or free energy, towards your own sustenance and growth. And I think quantum gravity, again, is kind of the final boss in terms of knowledge acquisition,
because once we've mastered that, then we can do a lot potentially.
But between here and there, I think there's a lot to learn at the mesoscales. There's a lot of information to acquire about our world, and a lot of engineering, perception, prediction, and control to be done to climb up the Kardashev scale. And to us, that's the great challenge of our times.
And when you're not sure where to go, let the meme pave the way.
Guillaume, Beff, thank you for talking today.
Thank you for the work you're doing.
Thank you for the humor and the wisdom you put into the world.
This was awesome.
Thank you so much for having me, Lex.
It's a pleasure.
Thank you for listening to this conversation with Guillaume Verdon.
To support this podcast, please check out our sponsors in the description.
And now, let me leave you with some words from Albert Einstein.
If at first the idea is not absurd, then there is no hope for it.
Thank you for listening.
I hope to see you next time.