a16z Podcast - AI and Accelerationism with Marc Andreessen
Episode Date: August 22, 2025

Marc Andreessen, cofounder of Andreessen Horowitz, joins the Hermitix podcast for a conversation on AI, accelerationism, energy, and the future. From the thermodynamic roots of effective accelerationism (e/acc) to the cultural cycles of optimism and fear around new technologies, Marc shares why AI is best understood as code, how nuclear debates mirror today's AI concerns, and what these shifts mean for society and progress.

Timecodes:
0:00 Introduction
0:51 Podcast Overview & Guest Introduction
1:45 Marc Andreessen's Background
3:30 Technology's Role in Society
4:44 The Hermitix Question: Influential Thinkers
8:19 AI: Past, Present, and Future
10:57 Superconductors and Technological Breakthroughs
15:53 Optimism, Pessimism, and Stagnation in Technology
22:54 Fear of Technology and Social Order
29:49 Nuclear Power: Promise and Controversy
34:53 AI Regulation and Societal Impact
41:16 Effective Accelerationism Explained
47:19 Thermodynamics, Life, and Human Progress
53:07 Learned Helplessness and the Role of Elites
1:01:08 The Future: 10–50 Years and Beyond

Resources:
Marc on X: https://x.com/pmarca
Marc's Substack: https://pmarca.substack.com/

Become part of the Hermitix community:
On X: https://x.com/Hermitixpodcast
Support: http://patreon.com/hermitix
Find James on X: https://x.com/meta_nomad

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed.
For more details please see a16z.com/disclosures.
Transcript
It's not surprising that natural selection is so oriented around replication, because replication is the easiest way to generate more structure.
The universe wants us to basically be alive. The universe wants us to become more sophisticated.
The universe wants us to replicate. The universe feeds us an essentially unlimited amount of energy and raw materials with which to do that.
Yes, we dump entropy out the other side, but we get structure and life to basically compensate for that.
That's the thermodynamic underpinning of effective accelerationism.
Today, we're running a drop from the Hermitix podcast featuring a conversation with Marc Andreessen.
In this episode, they discuss AI, accelerationism, effective accelerationism (e/acc), energy, the future of technology, and much more.
Let's get into it.
This episode, I'm joined by Marc Andreessen to discuss accelerationism, AI, technology, the future, energy, and more.
I'd like to say a big thank you to all my paying patrons and subscribers for making all of this work possible. And if you'd like to support the podcast, as it runs off patronage alone, then please find links in the description below. Otherwise, please enjoy.
So, Marc Andreessen, thanks very much for joining us on the Hermitix podcast.
Hey James, thanks for having me.
We are going to be discussing accelerationism, AI, technology, the future. Technology is probably the key one here, I think.
But I want to basically begin with something that, on the usual podcasts you go on, probably goes unasked, because a lot of people will know who you are, but in the sphere that I'm working in, people might not. So, yeah, just tell us a little bit about yourself and what it is you do before we get started here.
Yeah, so I'm probably the polar opposite from your usual guests.
Exactly.
So I'm bringing diversity to your production.
So I'm an engineer. My background: I'm a computer programmer, computer scientist, computer engineer.
I was trained kind of in the old school of computer science, where they kind of teach you, you know, every layer of the system, including hardware and software.
And then I was a programmer and then an entrepreneur in the 90s, and, you know, probably my main claim to fame is I was sort of present at the creation of what today you'd consider the internet, sort of the modern consumer internet people use. And so my work, first at the University of Illinois and then later at a company I co-founded called Netscape, you know, sort of popularized the idea of ordinary people being online, and then helped to build what you experience today as the modern web browser and kind of the modern internet experience. And then I was involved, you know, kind of through a broad range of Silicon Valley waves over the course of the next 20 years, in the 90s and 2000s, including cloud computing, where I started a company, and social networking, where I started a company.
And then in 2009, with my long-time business partner, I started a venture capital firm.
And our firm, which is called Andreessen Horowitz, is now kind of one of the firms at the center of funding, you know, all of the new generations of technology startups.
And maybe the main thing I'd kind of underline there is just, you know, quote-unquote technology, high tech, computer technology in particular, has always been kind of interesting and important in the economy for the last 50 years or something. But in the last 15 years, I think a lot of people feel that technology has really spread out and has become, you know, integral to many more aspects of life. And so my firm today finds itself very involved in the application of technology to, you know, everything from education, housing, energy, you know, national defense, national security,
as well as, you know, artificial intelligence, robotics, kind of every different dimension of how you might touch technology in your life.
Mm-hmm, mm-hmm.
And you picked up on something that will come into the conversation in a couple of questions' time, this notion of you being basically completely opposite to the majority of guests, not in a bad way, but often it's a lot of philosophy, which is, you know, theory and not practice. And also this notion of technology in relation to either pessimism or optimism. And this is super, super key, I think, for the ongoing atmosphere of really the West, of where we're going to end up. But before we get to these questions, I mean, this is a question I'm slowly phasing out, but I think it will work for the sake of our conversation, because we're talking more broadly around themes. I know you've listened to the podcast before, so it is the Hermitix question: you can place three thinkers, living or dead, into a room and listen in on the conversation. Who do you pick?
Yeah, I think that, um,
Maybe I'll give you two versions of the answer, and then maybe I could combine them.
So there's kind of the timeless answer.
And, you know, the timeless answer would be something like, you know, Plato or Socrates, and then Nietzsche. And then maybe I'd throw in, you know, one of your favorite people, Nick Land; I think he would be interesting.
You know, the somewhat more applied version of that, and this is sort of maybe a little bit more topical these days with the movie Oppenheimer that just came out, would be, you know, John von Neumann, who was one of the co-inventors of both the atomic bomb and the computer; Alan Turing, who became famous a few years ago with another movie, The Imitation Game; and then let's throw in Oppenheimer there
also because those three guys were sort of, you know, present at the creation of what we would
consider to be the modern technological world, you know, including, including literally those guys
were at the center, you know, especially von Neumann and Turing, were at the center of both, you know,
World War II, the atomic bomb, you know, the sort of information warfare, you know, the whole,
you know, kind of decryption, you know, kind of phenomenon, which really, you know, a lot of people
think won World War II, along ultimately with the A-bomb. And then also, right precisely at that time, with those people, the birth of the computer
and everything that followed.
So is that more of a practical room for you, or is it in terms of a vision going forward into the future? Or is there something else going on there between those six figures?
Those guys were, it's almost impossible to overstate how smart and visionary and far-seeing they were. Like, you know, there's actually a von Neumann biography that came out recently called The Man from the Future, and, you know, if anything, von Neumann is a more interesting character than Oppenheimer in a lot of ways, because he touched a lot more of these fields. And of the people who knew them, you know, von Neumann was always considered the smartest of what were called the Martians at that time, right, the sort of group of super geniuses that originated in Hungary in that era.
And so, you know, look, they were very, very conceptual thinkers.
I'll just give you one example of how conceptual they were, how profoundly smart they were.
So they basically birthed the idea of artificial intelligence right in the middle of the heat of World War II. Like, the minute they created the computer, the electronic computer as we know it today, in the heat of World War II, they immediately said, aha, this means we can build electronic brains.
And then they immediately began theorizing and developing designs for artificial intelligence. And in fact, the core algorithm of artificial intelligence is this idea of neural networks, which is this idea of a computer architecture that sort of mirrors, in some ways, the sort of mechanical operation of the human brain. You know, that was literally an idea from that era, in the early 1940s. Two other guys who were in this world wrote a paper in 1943 outlining the theory of neural networks, and that literally is the same technology that is the core idea behind what you see when you use ChatGPT today, 80 years later.
And so there was a very, very deep level of intellectual and philosophical, I don't know what it is, like they tapped into or discovered or developed a very deep well that we're still drawing out of today.
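(The 1943 paper alluded to here is McCulloch and Pitts' "A Logical Calculus of the Ideas Immanent in Nervous Activity." As a rough illustration, not from the episode, their model neuron just sums weighted binary inputs and fires when the sum crosses a threshold; the weights and thresholds below are arbitrary choices for the sketch.)

```python
# A minimal McCulloch-Pitts-style neuron: binary inputs, fixed weights,
# output 1 ("fire") when the weighted sum reaches a threshold, else 0.

def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND gate as a neuron: both inputs must be on to reach the threshold.
def and_gate(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

# An OR gate: either input alone suffices.
def or_gate(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

print(and_gate(1, 1), and_gate(1, 0))  # 1 0
print(or_gate(0, 1), or_gate(0, 0))    # 1 0
```

The same threshold-unit idea, scaled up with learned rather than hand-picked weights, is the lineage that runs to modern neural networks.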
I was going to, yeah, I was going to ask that immediately, but you covered it. I mean, are there any significant changes between AI then and AI now, or is it really just a matter of practicality, like we've got more resources and more ability to create it?
Yeah, we're at this fairly shocking moment.
So for people who have been following this, basically, it's this,
it's one of these amazing things where it's like, there's like this 80 year
overnight success that all of a sudden is paying off.
And so it's, you know, there were, you know, there were 80 years of scholars and
researchers and projects and attempts to build electronic brains.
And like every step of the way, people thought that they were super close. You know, there was this famous seminar on the campus of, I think it was Dartmouth, in like 1956, where they got this grant to spend 10 weeks together, to get all the AI scientists together, because they thought after that they'd have AI. And, you know, it turned out they didn't.
But, like, it's starting to work, right? And so when you use ChatGPT today, or on the artistic side you use something like Midjourney or Stable Diffusion, you're seeing the payoff from that.
I think the way to think about it is: it's the deep thinking that took place up front, and then, you know, just obviously a tremendous amount of scientific and technological thinking, development, and elaboration that took place since then.
But then there's two other kind of key things
that are making AI work today
that are kind of, and there's sort of, again,
there's sort of a combination of incremental
but also step function breakthroughs along the way.
So one is data.
And so just like, it turns out
a big part of getting a neural network to work
is feeding it enough data.
And so, you know, and the analogy is irresistible, right?
It's like, you know, if you're trying to educate a student,
right, you want to feed them,
you know, feed them a lot of material, in the human world also.
And so it just turns out there's, there's this thing with neural networks and data where,
as they say, you know, quantity has a quality all its own.
And you really needed actually the internet to get to the scale of data.
You needed internet scale data, you know, you needed the web to generate enough text data.
You needed like, you know, Google images and YouTube to generate enough video and imagery to be able to train.
So we're kind of getting a payoff from the internet itself, you know, combined with neural networks.
And then the third is the advances in semiconductors. And this is, you know, sort of the famous Moore's law. But, you know,
it's this, this phenomenon that, you know, that kind of we refer to as, you know, quote, unquote,
teaching sand to think. So kind of this idea, right, that you can literally convert, you know,
you know, silicon, you know, sand, rocks into, you know, into chips and then ultimately
into brains is kind of this amazing thing. And actually, I don't know if you follow this stuff, but as we're recording right now, there's this, like, amazing phenomenon happening in the world of semiconductors and physics, which is: we may have just discovered the first room temperature superconductor.
I've been seeing this, but I'm not smart enough.
Can you give me a brief overview of why this is so important?
I mean, I'm guessing, is this a resource input issue?
So basically, every time you build a circuit today, right?
Every time you build any kind of circuit, a wire, a chip,
you know, anything like that, an engine, a motor,
you know, you have basically this process.
And by the way, this actually relates to the philosophy of accelerationism, which we'll talk about. But you have this sort of thermodynamic process
where you're taking in energy on the one side, right?
And then you have a system, right, like an electrical transmission line
or a computer chip or something.
You have a system that's basically using that energy to accomplish something.
And then that system is inefficient,
and that system is dumping heat out the other end.
And, you know, this is why when you use your computer, you know, if you've got an older laptop, the fan turns on at a certain point. If you have a newer laptop, it just starts to get hot; you probably notice your phone starts to get hot. You know, batteries every once in a while do what they call a cook-off: lithium-ion batteries will explode, right? Like, there's always a byproduct of heat, and therefore, you know, sort of increased entropy, kind of coming out the other side of any sort of electrical or mechanical system. And that's just because, with, you know, kind of running energy through wires of any kind, you just have a level of inefficiency.
By the way, the human body does the same thing, right? Like, you know, we take in energy, and then, you know, we're sitting here humming along at, you know, whatever, 98.6 degrees Fahrenheit, significantly higher than room temperature, because our actual biochemical process of life, right, bioelectrical, is generating heat and dumping it out.
Anyway, so the idea of the superconductor is basically think about, in the abstract, as a wire that basically transmits information without, you know, with basically perfect fidelity, you know, perfect conservation of energy without dumping any heat into the environment. And it turns out that if you could do that, if you do that at room temperature, then all of a sudden you can have like, you know, basically like, you know, incredibly more efficient, you know, kinds of batteries, electrical transmission, motors, you know, computer chips. And so you can start to think about, for example,
Well, just an example people talk about is: if you covered the Sahara Desert in solar panels, you know, you could power basically the entire planet's energy needs today. The problem is there's no way to transmit that power from the Sahara to the rest of the world with existing transmission line technology. With superconducting transmission lines, all of a sudden you could. You know, quantum computers, you know, today they exist, but they're sharply limited because they have to be operated at these, you know, supercooled temperatures,
in these very carefully constructed labs. You know, with superconductors, in theory you have desktop quantum computers, you know, you have levitating trains, you've got, you know, a very broad cross-section; you have handheld MRIs, right? Like, every doctor, every nurse has an MRI, and they can just take a scan wherever they need to, on the fly, you know, like the Star Trek tricorder kind of thing.
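(As a back-of-the-envelope illustration of the heat-dumping described here, not from the episode and with made-up numbers: resistive loss in an ordinary line scales as P = I²R, while an ideal superconductor has R = 0.)

```python
# Joule heating in a transmission line: P_loss = I^2 * R.
# The current and resistance values below are arbitrary illustrative
# figures, not real line specifications.

def joule_loss_watts(current_amps: float, resistance_ohms: float) -> float:
    return current_amps ** 2 * resistance_ohms

copper_loss = joule_loss_watts(current_amps=1000.0, resistance_ohms=0.5)
superconductor_loss = joule_loss_watts(current_amps=1000.0, resistance_ohms=0.0)

print(copper_loss)          # 500000.0 watts dumped into the environment as heat
print(superconductor_loss)  # 0.0 -- no resistive loss at all
```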
And so anyway, it's fascinating.
So sitting here today, there are the reports of this breakthrough, and there are these almost UFO-style videos of this material levitating where it's not supposed to be levitating, as a consequence of this breakthrough.
And there are betting markets on scientific progress.
And the betting markets, as of this morning, have the odds of this being a real breakthrough
at exactly 50-50.
Not the worst odds.
No, but it's funny.
If you think about it, it's funny, because our entire world right now, from a physics standpoint, is like Schrödinger's cat. Like, we live, sitting here today, in a superposition of two worlds: one in which we now have room temperature superconductors, and one in which we don't. And, you know, these are radically different potential futures for humanity, right?
And so if it turns out it's true, you know, it's an amazing step function breakthrough. If not, it'll, you know, it'll set us back, and people will go back to trying to figure it out. But, you know, between the time we're recording and the time we release, we may even find out whether the cat, the superconducting cat in the box, is alive or dead.
That alive or dead state, I mean, these two separate futures is really something that I see, you know, when I was reading your blog, when I was looking at effective accelerationism and accelerationism we'll get to.
But these two futures, I think, is the big question that I want to ask you, which is because you've lived through this time, which is going through the optimism of the 90s, especially, you know, you mentioned Nick Land at the start.
I mean, you see that in philosophy, you see that in technology, see that in the history,
this huge, let's call it a cyberpunk optimism regarding our technological future.
And I would say now, I don't know, you know, whether or not you agree with me, please let me know.
We have entered into what Land himself called a slump from the 2000s, like late 2000s,
you know, early 2000s onwards.
And there seems to be within the air a sort of cynicism, a sort of pessimism, that we've just ended up in this, like, place of stagnance. And do you see, I mean, if you agree with me in terms of those two possibilities, do you, I mean, I think I would be right in saying you're an optimist, do you see us now re-entering into a new phase of optimism regarding technology and regarding the future?
Well, so there's several layers to this question. I would be happy to kind of go through them; we can spend as much time on this as
and I think this is a great topic and, you know, your observations are very real. The core thing
that I would go to to start with is not kind of the social, political, you know, kind of philosophical
dimension. The core thing I would go to to start with is the technological dimension.
In other words, at the substantive level, like, well, what is the actual rate of technological
change in our world? And you'll note that on the social dimension, we seem to whip back and forth between, oh my God, there's too much change and it's destabilizing everything, and then we whip right around to, oh my God, there's not enough change and we're stagnant, right?
And it's horrible.
So there's kind of dystopian versions.
You know, there's kind of dystopian mindsets in the air kind of in both directions.
So anyway, so I would start with kind of the technological kind of substantive layer to it.
And there, you know, the observation, and this is not an original observation on my part.
You know, Peter Thiel and Tyler Cowen in particular have gone through this in a lot of detail
in their work. But, you know, basically, like, if you look at the long arc of technological development
over the course of, you know, basically, which effectively started with the Enlightenment, right?
So you sort of practically speaking, you're starting around 1700 and, you know, projecting
forward to today. It's about 300, 300 years worth of what we would consider kind of systematic
technological development. You know, it's basically, if you look at kind of that long arc,
and then if you basically measure the pace of technological development, and I'll pause here to say you actually can measure the pace of technological development in the economy with a metric that economists call productivity growth.
And basically the way that that works is, you know, economic productivity is defined basically as output per unit of input, right?
And you can, you know, whatever your inputs are.
It could be energy, right?
It could be, you know, raw materials, you know, whatever you want.
And then, you know, output is, you know, actual output: more cars, more chips, more clothes, more food, more houses.
And so basically what economists will tell you is the rate of productivity growth in the economy,
which they measure annually, basically is the rate of technological change in the system, right?
And so if technology is paying off, right?
If the advances are real, then your economy is able to generate more output with the same inputs.
If your technological development is stagnant, then that's not the case.
And it's an aggregate measure, but it's a good measure overall.
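(For concreteness, with numbers invented purely for illustration, not from the episode: productivity is output per unit of input, and productivity growth is the year-over-year change in that ratio.)

```python
# Productivity = output / input; productivity growth = the fractional
# change in that ratio from one year to the next. All figures invented.

def productivity(output_units: float, input_units: float) -> float:
    return output_units / input_units

def growth_rate(prev: float, curr: float) -> float:
    return (curr - prev) / prev

p_2023 = productivity(output_units=1000.0, input_units=100.0)  # 10.0 units out per unit in
p_2024 = productivity(output_units=1030.0, input_units=100.0)  # same inputs, 3% more output

print(f"{growth_rate(p_2023, p_2024):.1%}")  # 3.0% -- a fast, mid-century-style pace
```

On this metric, the 2-4% annual figures cited for 1880-1965 mean the economy was squeezing noticeably more output from the same inputs every single year.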
If you look at those statistics, basically what you find, to take more recently just the last century, is we had very rapid productivity growth in the West basically for the first half of the 20th century.
So from basically, you know,
what was called the Second Industrial Revolution,
which started around 1880, 1890,
through to basically the mid-60s.
We had actually a very rapid rate of technological development.
And by the way, in that era, right, we got, you know, the car, the interstate highway system, the power grid, telegraph, telephone, radio, television.
You know, we got computers; we got, you know, both atomic weapons and also nuclear power technology, right? And so there was this tremendous kind of technological surge that took place in that sort of, call it 1880 to 1960, 1965 kind of period. And productivity growth ran through that era at two to four percent a year, which in the aggregate is very fast for the economy overall. Like, that's a very fast pace of change. Basically, since the
mid-60s, early 70s, the rate of productivity growth basically took a sharp deceleration. And so in basically the 50 years, 52 years now, that I've been alive, you know, it's been a step lower. It's been one or two percent a year. It's been kind
of persistently too low relative to what it should be. And, you know, I think there's a bunch
of possible explanations for that. But I think the most obvious one is that basically the world of
technology bifurcated in the 70s and 80s into two domains. One domain is the domain of bits, you know, the domain of computers and the internet, where there has been, you know, obviously very rapid technological development, potentially now culminating in AI.
But then there's also the world of atoms. And, you know, the diagnosis, at least, that I would
apply is we essentially outlawed technological development and innovation in the realm of
atoms, you know, basically since the 1970s. There are many examples of how we've done this. And,
you know, you can look at things like housing policy and you can kind of see it quite clearly.
But also very specifically, you can see it in energy, which is, you know, we discovered nuclear power, right? We discovered a source of, you know, unlimited, you know, zero-emissions energy that, compared to every other form of energy, is like ultra safe. You know, nuclear energy is by far the safest form of energy that we know of. And, you know, in the 1970s, we essentially made it illegal, you know, just like totally banned it.
And we can talk more about that. But like, that was a draconian thing that, you know, has consequences through to the world we live in today. And so we live in this, and you mentioned
cyberpunk, and this is actually kind of the cyberpunk ethos that I think actually reflects something real, which is, you know, if you're in the virtual world, it's like, wow, right? It's amazing, like everything is spectacular. And look, even a podcast like yours, right, would have been, you know, inconceivable 30 years ago, right? And so, like, information transmission, communication, you know, all these things have taken huge leaps forward. But then the minute you get into a car, or the minute you plug something into the wall, right, or the minute you eat food, right, you're still living in the 1950s. And so I think we live in a schizophrenic world with respect to that question.
Why then? So you write about this in your blog post on AI, which we'll get to, but you draw on Prometheus, right, this consistent historical cycle of when there is a new technology, it's
going to destroy us, everything's going to end, it's the worst thing ever, we need to be careful
of it. You know, the TV is going to burn your eyeballs out of your sockets. The vacuum cleaner is
going to, I don't know, like explode or whatever, but every time there is a, like a cyclic change
of a new technological innovation, it's this Promethean thing of we're pretty terrified of it,
and we want it to go away. And then eventually we're like, oh, actually, no, that's pretty
helpful. But there seems to be, as you said, there's something that happened in the 1970s where
we just pushed away the atomic world in favor of the bits, you know, which makes sense.
But why? I mean, there's probably a lot of governmental reasons for this as well, but it seems like a fear, really, the way you talk about it. Like, why were we so, in a way, scared to then develop the atomic world in the way we had the bit world?
Yeah, so I'd start even deeper, I think, which is: there's a deep fear in the human psyche, and I think probably in the human animal, of new knowledge. Like, it's even a level deeper: technology is an expression of knowledge, right? Like, the Greeks had this term techne, which is sort of, you know, where the word technology comes from, but I think the underlying meaning is more like general knowledge.
You know, the key to Christian theology, right, is, you know, what was the original sin, right? It was eating the apple from the tree of knowledge, right? It was mankind learning that which he was not supposed to learn.
And so, you know, the Greeks had the Prometheus myth.
The Christians have the snake in the Garden of Eden and the tree of knowledge.
Like, there's something very, very deep there. Like, there's an asymmetry, I think, wired deeply into the human brain, right, which is, you know, sort of fear versus hope, which from an evolutionary standpoint would make a lot of sense, right? Which is, like, okay, if you're living in, let's say, prehistoric times, you know, in the sort of long evolutionary landscape that we lived in, is new information likely to be good or bad? Probably, over the sweep of, you know, the billions of years of evolution that we went through, most new information was bad, right? Most new information was the predators coming over the hill to kill you. And so I think there's something, like, deeply resonant about the idea that new is bad.
And by the way, look, in the West, I think from a historical and maybe comparative standpoint, we're actually quite enamored of new things as compared to a lot of traditional societies, and so if anything we've overcome some of our natural instincts on this. But that impulse is still deep. And then if you go up one level, to kind of the social level, you know, I'm quite bought into an explanation on this
that was provided by a philosopher of science, a historian of science, named Elting Morison, at MIT in the first half of the 20th century, who talked about this.
And he said, look, you need to think about basically technology intersects with social systems.
When a new technology intersects with a social system, basically what it does is it threatens
to upend the social order, right? And so at any given moment in time, you have a social order,
right, with status hierarchies, right, and people who are in charge of things.
And basically what he says is the social order of any time, you know, in sort of modern Enlightenment Western civilization, is a function of the technologies that led up to it, right?
And so you have a certain way of organizing the military.
You have a certain way of organizing, you know, industrial society.
You have a certain way of organizing, you know, political affairs.
And they are the consequence of the technologies up to that point.
And then you introduce a new technology, and the new technology basically threatens to upend that status hierarchy, and the people who are in power all of a sudden aren't, and there are new people in power. And of course, you know, what is the thing that people
will fight the hardest to maintain, you know, is their status in the hierarchy? And then he goes
through example after example of this throughout history, including this incredible example of the
development of the first naval gun that adjusted for the roll of a battleship in battle, which increased the firing accuracy of naval guns by like 10x. It was one of the great decisive breakthroughs in modern weaponry. And it still took both the U.S. and British navies
25 years to adopt it because the entire command status hierarchy of how naval combat vessels
were run and how gunnery systems worked and how tactics and strategy worked for naval battles
like had to be upended with the invention of this new gun. Anyway, and so he would basically
say, essentially: you roll out this new technology, it
causes people who used to have power to no longer have power, and it puts new people in power.
In modern terms, the language that we would use to describe this is gatekeepers, right?
Like, so, why is the traditional journalism press so absolutely
furious about the Internet, right?
It's because the Internet gives regular people the opportunity to be
on at least a peer relationship, if not, in the case of somebody like Joe Rogan,
a superior relationship, right?
And that's an upending of the status hierarchy.
And it's kind of the same thing all the way through. Basically, one of the
ways to interpret the story of our time from a social standpoint is that all of the gatekeepers
who were strong in the 60s and 70s are basically
being torn down. I'll give you another obvious example: political parties, right? Why are so many
Western political parties in a state of some combination of freak out and meltdown right now, right?
It's because in an era of radio and television, they were able to broadcast a top-down message
and they were able to tell voters basically what to think. In the new model, voters are deciding
what they think based on what they read online, and then they're reflecting that back up and finding
their politicians wanting, right? And so therefore the re-rise of populism and
sort of the blowing out of both left-wing and right-wing parties,
right, the sort of, you know, the center is not holding. Anyway, that would be another example in
Morison's framework. And then, just to close this out: Morison has this fascinating observation.
As a consequence of the fact that technology changes social hierarchies, he says there's a predictable
three-stage process to the reaction to any new technology by the status quo, right, by basically the
people in power at that time. He says step one is ignore. So just pretend it doesn't
exist. Which, by the way, is actually a pretty good strategy, because most technologies
don't upend social orders.
Like, most new technologies don't work at the time that they're first presented, so maybe ignore is actually a rational strategy.
Step two is what he calls rational counterargument, and so that's where you get the laundry list of all the things that are wrong with the new technology, right?
And then he says step three is when the name-calling begins.
This, I mean, I've watched a couple of your other interviews
recently, and this relates to,
I know you've been talking about
Nietzsche's master and slave
morality recently,
and this seems to tie
into that notion from Nietzsche. And
he does a typical philosophical
thing of taking a French word
and drawing it out: ressentiment, right?
Instead of just having
a look at nuclear power and seeing where it would go,
and allowing that power to unfold
within society,
you try to invert the morals.
So you say, well, actually, because these people don't have the
will to power, because they don't have the ability
or the engineering skills, I guess in your own case,
to utilize the thing,
they invert the morals and say,
well, actually the good thing is to do the inverse,
is to not have it.
Like, this is bad. And that then immediately puts them in the good camp.
But it seems like, to be honest,
it really feels, especially with AI,
and also now with nuclear power,
now that, especially in Germany,
certain things have been tried,
and now it's like, okay, this was a really bad mistake
in terms of energy.
Like the cat's out of the bag,
and there's now this
force of having to move. You were talking about the second and third stages there. It's
almost like, look, with AI especially, the cat's out of the bag. We have to move. There's
no choice of ignoring it or reacting against it now. You either deal with it or you
don't. Yeah, so let's spend one more moment on nuclear power and then we'll go to
AI. So nuclear power is so interesting, because nuclear power is the tell. Like, I always look for
the little signals that people don't really mean what they say, or that
their sort of moral system doesn't
quite line up properly. And so nuclear power is this amazing thing.
It's like, literally, okay, you build this thing. It generates power.
It generates a small amount of nuclear waste. It generates steam. But it generates zero emissions,
right? Zero carbon, right? And so you have this amazing phenomenon where you have
this, and let's just take them completely at face value. This is not me questioning,
I'm not going to question carbon emissions or global warming. I'm just going to assume
that everything the environmentalists say about carbon emissions, climate change, all of that stuff,
let's assume that that's all totally real.
Like, let's just grant them all that.
It's like, okay, so how can you solve the sort of climate crisis,
the carbon emissions crisis?
It's like, well, you have the silver bullet technology you could roll out in the form
of nuclear fission today.
You could generate unlimited power.
Richard Nixon, by the way, the much-condemned Richard Nixon, in 1972, you know, proposed
something at the time he called Project Independence.
Project Independence was going to be the United States building a thousand new
civilian nuclear power plants by the year 1980 and cutting the entire U.S. energy grid,
including the transportation system, cars, everything, home heating everything over to nuclear
power by 1980, going zero emission in the U.S. economy. And by the way, right, geopolitically
removing us from the Middle East, right? So no, right, no Iraq, Afghanistan, all this stuff,
like, just completely unnecessary, right? And, you know, you'll note that
Project Independence did not happen, right? We don't live in that world today. And so
it's like, okay, you've got this crisis, you've got this silver bullet solution
for it, and you very deliberately have chosen to not adopt that solution.
And there's actually a very interesting split in the environmental movement today, and it's really,
I think, kind of bizarre. It's like a 99-to-one split. You ask 99%
of environmental activists about nuclear power, and they just sort of categorically dismiss
it: of course that's not an option. You do have this kind of radical fringe, with people like
Stewart Brand, who are basically now pointing out that it is the silver bullet answer,
but most of them are saying, no, it's not an answer. And it's like, okay, well, why are they
doing that? It's like, well, what is it that they're saying that they want to do? And what
they're saying they want to do is what they call, you know, degrowth, right? And so they want
to decarbonize the economy. They want to de-energize the economy. They want to de-grow the
economy. And then, you know, when you get down to it and you ask them a very specific question about
the implications of this, you know, basically what you find is the general model is they want
to reduce the human population on the planet to about 500 million people. You know, it's kind of,
kind of the answer that they ultimately come down to. And so, so ultimately the, you know,
the big agenda is to, is to reduce the human, you know, basically the human herd, you know, quite
sharply. And, you know, they kind of dance around this a little bit, but when they, when they really
get down to it, this is what they talk about. And of course, Paul Ehrlich is kind of
one of the famous icons of this; he's been talking about this for decades. I think it was
Jane Goodall who used the 500 million number recently in public.
And so then you've got this very interesting technological, philosophical, moral question, which is: well, what is the goal here, right? Is the goal to solve climate change, or is the goal to depopulate the planet? Right? And to the extent that free, unlimited power would interfere, it would only be a problem if the actual agenda is to depopulate the planet. And, like, I would like this to not be the case. Again, taking everything else that they say at face value, you'd like to solve carbon emissions and
climate change and everything else. But, you know,
you might also say you want a planet on which there are not only eight billion people,
but maybe more people are good, right? Maybe you actually should have 20 billion
or 50 billion people. And we have the technology to do that, and we're choosing not to do it.
So this is the thing. This gets into these very deep questions, right, to your point:
very deep questions about morality, and how we maneuvered,
per Nietzsche, how we reversed ourselves into a situation where we're actually
arguing against human life.
And of course, and we'll get to it, but this is then a big part of the origin of the idea of effective accelerationism, which is basically: no,
let's go sharply in the other direction.
Oh, and then, yeah, so AI. AI is already playing out much the same way. And here you've got this just incredible phenomenon happening, where it looks like we have a key breakthrough to basically increase the level of intelligence all throughout society and around the world, for basically the first time directly applying new general intelligence to the world. And there is this incredibly aggressive movement that is actually having tangible impact today in the halls of power in Washington, D.C. and in the EU and other places,
you know, that is seeking to stop and reverse it, you know,
as aggressively as they possibly can.
And so we're going through, I would say, a suddenly accelerated
and very sharp and aggressive version of exactly what happened
with nuclear power, happening with AI right now.
I mean, this is the thing. Well, there's two questions,
because on your blog, it's really refreshing to see
you're pretty to the point when you say, look,
AI is code.
It's code written by people, by human beings,
on computers developed by human beings.
You know, like, we're in control. You're not afraid
of this. I think
there was, you know, Musk signed a big
thing, where like a thousand
people signed this letter to say we need to
halt this, the whole Roko's Basilisk,
AI is going to be Terminator 2
coming and blowing us up with robots,
it's going to kill us all. You're very much
like, no, this is code, this is just an intelligence
for us to use.
Now that's one question, you know, I guess
why isn't AI going to kill us all?
And I know you've spoken about that a lot, so that
answer can be brief. But secondly, this whole
idea of trying to reverse it: to me it seems inherent within AI as a thing that once,
you know, the cat's out of the bag, you can't. Like, once it's here, outside of
really draconian measures, you can't, because how do you hold back an intelligence
which is growing, right? Well, except, you know, they did stall out nuclear power, right? So,
right, like, so they did, it worked. So why did Project Independence not happen? Why do we not
have unlimited nuclear power today? The reason is because it was
blocked by the political system, right? And so, you know, Richard Nixon, who I mentioned,
you know, proposed this. He also created the Environmental Protection Agency and the Nuclear
Regulatory Commission. And actually, this has been a big week.
The first newly designed nuclear power plant in the last
50 years just went online in Georgia, something like $20 billion over budget. And, you know,
it's a story of its own, but at least we got one online.
It's the first nuclear power plant of a new design ever authorized by the
Nuclear Regulatory Commission since Nixon created that commission, right? And so we put in place
a regulatory regime around nuclear power in the 1970s that all but made it impossible.
By the way, you alluded to the Germany thing earlier, I'll just touch on that for a second.
So, you know, I'm sure you've heard of the idea of the precautionary principle, right?
Right, which is this idea that basically scientists and technologists have a moral obligation
to think through all the possible negative consequences of a new technology.
before it's rolled out. The precautionary principle, and we could talk about that, including whether scientists and technologists are actually qualified to do that. This was also a central theme of Oppenheimer. But the precautionary principle was invented by the German Greens in the 1970s, and it was invented specifically to stop nuclear power. And, you know, it is just amazing: we're sitting here in 2023, and we in the West are effectively at war with Russia.
Right. And it's a proxy war right now that hopefully doesn't turn into a real war, but who knows; proxy wars have a disconcerting pattern of spilling over into becoming real wars. And a lot of this is a tale of energy. Basically, the Russian economy is like 70% energy exports, right? Oil and gas exports. The major buyer of that energy historically has been Europe, and specifically Germany.
Europe, and Germany specifically, essentially have funded the Russian state,
the Putin state, and that funding is what basically built and sustains their
military engine, which is what they've used to invade Ukraine.
Right. And so there's this counterfactual, right,
where the German Greens did not do what they did in the 1970s,
nuclear power was not blocked, and Germany and France and the rest of Europe today are
fully energy independent, running on nuclear power. The Russian state would be greatly weakened,
because the value of their exports would be enormously diminished,
and they would not have the wherewithal to invade other countries or to threaten Europe.
And so these decisions have real consequences.
And, you know, these people, and I use the term in the pejorative sense, they are so confident that they can step into these
debates, these kind of questions around new technologies and how they should be applied and what the consequences are.
They can step in and they can use the political machine to basically throw sand in the gears and stop these things from happening.
So, like, this is what's happening with AI right now.
So, in the sort of theoretical framing where AI is this potentially runaway thing, then, right,
maybe it can't be constrained.
But in the real world, it very much can be constrained.
And the reason it can be constrained in the real world is because it uses physical resources, right?
It has a physical layer to it.
And that layer is energy usage and that layer is chips and that layer is, you know, telecom bandwidth.
and that layer is data centers, physical data centers, right?
And, by the way, that layer also includes the actual
technologists working in the field and their ability to actually do what they do.
And there are, you know, a very large number of sort of control points and pressure points that,
you know, the state can put on those layers to prevent them from being used for whatever it
wants to prevent.
And look, the EU is on the verge: the EU has this anti-AI bill that it looks like is going to pass, and it
is extremely draconian. It may result in Europe not even having an AI industry,
and may result in American AI companies not even operating in Europe.
And then in the U.S., we have a very similar push happening, as
what I would describe as the anti-AI zealots are, you know,
in the White House today, right, arguing that this is bad, it should be stopped.
And it's like, it's amazing.
How many times are we going to run through this loop?
How many times are we going to repeat history
here? How many times are we going to be self-defeating like this? Apparently
the impulse to be self-defeating, we have not worked it out of our system.
You don't want to be self-defeating, though. I mean, let's move into these peculiar
four letters, which are found at the moment at the end of your Twitter name and
floating around Twitter: e/acc, or effective accelerationism.
And this is just beautiful to me. It's like the accelerationist renaissance; I've been
talking about it in that way. I don't want to gatekeep it too much,
but, you know, I wrote my master's thesis on accelerationism.
Like, I love it. I love talking about it.
You don't want any of this holding back.
You don't want to hold anything back.
You want to accelerate.
So, firstly, I mean, there's two questions there.
What is it for you to accelerate?
And what is effective accelerationism?
Yeah, so let me just say where it came from.
I'll reverse, I'll answer the second one first
and then go to the broader topic.
So it's a combination.
There's, you know, kind of two words there: effective and accelerationism.
So the accelerationism part of it is obviously building on what you've talked about and what Nick Land and others have talked about for a long time.
And, of course, as you've talked about, there's all these different versions of accelerationism.
And so this is proposing one that is the closest to what you would call right accelerationism, although maybe without some of the political overtones.
And so there is that component.
There's also the effective part of it.
And the effective part of it, it's sort of a half humorous reference, obviously, to effective altruism.
And it's a little bit tongue-in-cheek because it's like, of course, if you're going to have a philosophy, of course, you would like it to be effective.
But also, look, e/acc's enemy, right, the oppositional force, the thing that e/acc was sort of formed to fight, is actually specifically effective altruism.
Right. And so you also sort of use the term effective to kind of make that point:
this is in that world, and this is opposed to that.
And the reason why this is happening now, the reason why the concept of effective
accelerationism has kind of come into being. And by the way, this is not
originally my formulation. There are these kind of ultra-smart
Twitter characters who I think are still mostly operating under assumed names;
Beff Jezos and Bayeslord are the two of them that I know. And these are, like,
top Silicon Valley
engineers, scientists, technologists, but, at least for now,
they're operating under cover of pseudonyms. So the reason this is happening now is because
of what I was describing earlier with AI, which is you have this other movement,
the movement of what's sometimes called, people use different terms, AI risk,
AI safety, AI alignment. Sometimes you'll hear the term X-risk. And
this is all directly attached to the EA world, the effective
altruism world. And the central characters of this other world are,
you know, Nick Bostrom, Eliezer Yudkowsky, the Open Philanthropy organization,
a bunch of these kind of, you know,
what we call the AI doomers running around.
The doomer movement is basically part and parcel with the effective altruism
movement. And AI existential risk has always been kind of the boogeyman of
effective altruism, going back over the 20-year development of
EA. And so anyway, that EA movement, by the way, lavishly funded by
EA billionaires, which is part of the problem, by the way, who made all their money in
tech, which is also amazing. So you've got this funding complex, you've got this
EA movement, you've got this attached AI risk and safety movement, and now you've got active
lobbying, a sort of anti-AI PR campaign. And so anyway, effective accelerationism
is intended to be the polar opposite of that.
It's intended to be, you know,
to head boldly and firmly and strongly and confidently into the future.
As for why this form of positive accelerationism,
there's a couple different layers to it.
The founders of the concept of e/acc have a thermodynamic kind of thing,
which we could talk about, but it's kind of one layer down from where I operate.
The layer I operate at is more at the level of engineering.
And when I think about it, I think fundamentally in terms of material
conditions. So human flourishing, quality of life, standard of living of human beings on
earth. And back to that concept of productivity growth, you know, the application of technology
to be able to cause the economy to be more productive and therefore cause more material wealth,
higher levels of material welfare, you know, for people all over the world. By the way,
also with reduced inputs, right? And so not just greater levels of development and greater levels
of advance, but also greater levels of efficiency. And the nature of technology as a lever
in the physical world is that you can have your cake and eat it too:
you can get higher levels of output with lower levels of input.
And the result of that is a much higher standard of living.
So my philosophical grounding is sort of, I don't know what it would be called,
like a positive materialism or something,
which is, like, I think the thing that
the technology industry does best is improve material quality of life.
I think that we should accelerate as hard into that as we possibly can.
I think the quote-unquote risks around that are greatly exaggerated,
if not false.
And I think the forces against basically technological progress,
like the environmental movement I described,
are fundamentally, at some deep level, sort of anti-human.
They want fewer people, and they want a lower quality of life on Earth.
And I just very much disagree with both of those.
And what is this at the thermodynamic level?
Is this the, you know, our ultimate enemy is entropy?
So the thermodynamic part gets complicated, and this is not
my field, so there are other people you should probably have on to talk about this. But
the effective accelerationism version of the thermodynamic thing is based on the work of this
physicist named Jeremy England, who is this very interesting character. He was actually trained
by one of my partners and is now basically an MIT physicist and biologist. And,
by the way, an interesting guy. I don't know him, but a very interesting guy from a distance:
he's also a trained rabbi.
And so he's an interesting cat.
And so he basically has this theory that
life, like the phenomenon of life itself,
is a direct consequence of thermodynamics.
And the way he describes it is basically,
if you take the universe,
with a level of energy that's washing around, and raw materials,
and you apply kind of natural selection
at a very deep level,
even at the level of just the formation of materials on a planet or something,
you basically have this thing where matter wants to organize itself into states
where it's able to absorb energy and achieve higher levels of structure.
And so you have absorption of energy, and you have achievement of higher levels of structure.
In the case of organic life, that starts with RNA and then kind of works its way up to
full living systems.
And then on the other side of that, as we talked about before, the result
is that you're dumping heat, which is to say entropy, out into the broader
system. And so it's almost like saying the second law of thermodynamics has its upside, right,
which is: yes, entropy in the universe is increasing over time, but a lot of that increase is
the result of structures forming that are basically absorbing energy and then exporting entropy.
And one form of that structure is actually life. And this is actually a thermodynamic,
biomechanical, bioelectrical explanation of how organic life works.
Like, this is what we are: we are machines for gathering energy,
forming increasingly complicated biological machines,
and replicating those machines, right?
And of course, he talks about natural selection:
it's not surprising that natural selection is so oriented around replication, right?
Because replication is the easiest way to generate more structure.
Like, replication is the way that a system that is basically
in the business of generating structure
can most efficiently generate more structure.
And so anyway, basically, the universe wants us to be alive.
The universe wants us to become more sophisticated.
The universe wants us to replicate.
The universe feeds us
an essentially unlimited amount of energy and raw materials with which to do that.
Yes, we dump entropy out the other side, but we get structure and life
to basically compensate for that.
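The "we get structure and dump entropy out the other side" argument can be written as one line of standard second-law bookkeeping. This is a simplified textbook sketch, not Jeremy England's full dissipative-adaptation formalism; the symbols (system and environment entropy, heat Q, reservoir temperature T) are the usual thermodynamic ones.

```latex
% Total entropy never decreases, but it splits into two terms:
\[
  \Delta S_{\mathrm{univ}} \;=\; \Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{env}} \;\ge\; 0
\]
% A structure-building system (e.g. a living thing) can run
% \(\Delta S_{\mathrm{sys}} < 0\), becoming locally more ordered,
% provided it exports enough entropy by dumping heat \(Q\) into an
% environment (an ideal reservoir) at temperature \(T\):
\[
  \Delta S_{\mathrm{env}} \;=\; \frac{Q}{T},
  \qquad
  \frac{Q}{T} \;\ge\; -\,\Delta S_{\mathrm{sys}}
\]
```

In other words, local order is paid for by exported heat, which is the "upside of the second law" framing in the conversation.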
The universe is pro-natalist and kind of Nietzschean
there as well. Yeah, exactly, 100%. So anyway, that's the thermodynamic
underpinning of effective accelerationism. Of the people who have encountered effective accelerationism,
some of them get very deeply into that,
and there's a very deep kind of well there to draw from. This guy Jeremy England has a book out.
Actually, you'll appreciate this: the title of the book is
something like Every Life Is on Fire. And it's actually funny, because if you read Heraclitus,
you're like, oh my god, he saw it, right? There's something very, very deep going on here
with this sort of intersection of energy and life. But so he's got this book out, which apparently
is quite good. And so some people in effective accelerationism kind of go deep there. There's a tongue-in-cheek
reference to the so-called thermodynamic god, right, which is not a literal,
you know, religious god in the literal sense, like a conscious god or a sentient god.
It's a bit more of this idea that the universe is sort of designed
to express itself
in higher and higher forms of life.
Yeah, to your point, there's an obvious direct Nietzschean connection.
You know, so maybe he saw a lot of this too.
And obviously he was writing and thinking
at the same time Darwin was figuring a lot of this out
on the natural selection and evolution side.
Yeah, so there's that.
But having said that, like I said, my take on it is more, you know,
I find that stuff fascinating.
I'm more naturally inclined, as an engineer,
towards the material side,
and so I just more naturally think
in terms of the social systems
and the technological development
and the impact on material quality of life
and so I think you can also just take it at that level
and not have to get all the way down
into thermodynamics if you don't want to.
I mean, there's an odd...
I mean, yeah, drawing it down to this level of engineering,
well, not down to, but just to this level of engineering,
there's this odd learned helplessness.
And just to take the two examples we've given, which work quite well, actually:
nuclear energy on the atom side,
and AI on the bit side of things,
the virtual, I guess, and the real.
You posted this really interesting essay on your blog
about availability cascades,
which is about basically, in short,
if I'm getting this right,
this idea of why are so many people
interested in this thing
or this view of whatever the opinion
or the idea is that's floating around?
And it seems on both of those,
both nuclear energy and AI, we have that same opinion, which is like a memetically infected
culture of a sort of learned helplessness. Like, oh no, we've already spoken about this
a bit, but like, oh no, we need to get rid of this, we can't deal with this. But it seems,
do you think, on the engineering side of things, and I guess it overlaps also into the social,
in terms of how you engineer and how you promote these ideas socially, as tools, as things
that people use, is there an attempt to invert that availability cascade? And, like,
to begin some mimesis on the side of: it's okay to want a better quality
of living. It's okay to want to grow. It's okay to want energy. Like, you don't have to be
almost submissive to whatever this strange, self-defeating learned helplessness is
that we have in terms of technology, and our weird allegiance to just this
stagnant comfort that we've had for too long.
Yeah, that's right. That's exactly right.
Right. And like we talked about earlier, I think there's a natural human impulse, deeply wired into the limbic system or something, which is basically fear over hope, right? Like, what's most likely to come over the ridge: a saber-toothed tiger that wants to eat you, or something warm and cuddly that wants to be your friend, I guess a quokka or something like that? Right. So it's probably the tiger. And there's, you know, false positive, false negative, two ways of making mistakes. And from an evolutionary standpoint, you definitely want to err in the direction of overestimating the rate of saber-toothed tigers in order to survive. So that impulse is deep. Yeah.
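The asymmetry Marc is describing can be sketched as a tiny expected-cost calculation. This is an illustrative example of my own, with made-up cost numbers, not something from the conversation:

```python
# Hypothetical costs, chosen only to illustrate the asymmetry:
# missing a real predator (false negative) is catastrophic,
# while fleeing from nothing (false positive) is cheap.

def expected_loss(action: str, p_threat: float,
                  cost_fn: float = 1000.0, cost_fp: float = 1.0) -> float:
    """Expected cost of choosing to 'stay' or 'flee' given a threat probability."""
    if action == "stay":
        return p_threat * cost_fn        # pay dearly if the tiger is real
    return (1.0 - p_threat) * cost_fp    # pay a little for a needless sprint

# Even at a 1% threat probability, fleeing minimizes expected loss,
# so a bias toward overestimating threats is the "rational" wiring:
p = 0.01
assert expected_loss("flee", p) < expected_loss("stay", p)
```

With these particular costs, staying only beats fleeing when the threat probability drops below roughly 0.1%, which is why evolution can tolerate an enormous rate of false alarms.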
But then, you know, we're not just limbic systems anymore. We have the ability to control our environment, the ability to build tools. We're not afraid of saber-toothed tigers anymore. And so we have the ability to shape our world. We developed rationality and the Enlightenment and science and technology and markets and everything else to be able to control the world to our benefit. And so we don't have to live cowering in fear anymore, as much as that might be grimly satisfying; we don't actually have to do that. And there are many, many good reasons over the last 300 years to believe that there's a much better way to live. Yeah, but look, somebody has to, you know,
somebody has to actually say that. And then, look, I think the other part is, there's a big divide. I'll pull out my Burnham on this a little bit: a big divide on this stuff between what you'd describe as the elites and the masses, which has turned out to be pretty interesting.
So I would say this problem, this problem of fear of technology or hatred of technology
or desire to stop technology, I think it's primarily a phenomenon of the elites.
I actually don't think it's particularly shared by the masses.
And it just seems like, I'll just take AI as an obvious example.
One of the amazing things about AI is that it's freely available for use by everybody in the world right now, today, fully state of the art. Like, the best AI in the world is on websites from OpenAI and Google and Microsoft, and you can go on there and use it for free today. And, you know, hundreds of millions of people, 100, 200 million, something like that, around the world are already doing this, right? And if you talk to any teacher, they'll already tell you they've got students using ChatGPT to write essays and so forth. Right. And so you've got this amazing thing where,
you know, like the internet before it
and like the personal computer before it
and like the smartphone before it, AI is immediately democratized. Right? Like, it's immediately available in its full state-of-the-art version. There's no more advanced version of GPT that I can buy for a million dollars than you can get for free, or by paying 20 bucks for the upgraded version, on the OpenAI website. Like, the state-of-the-art stuff is fully available for free.
And so you have people all over the world.
And this is one of my sources of optimism that the AI doomers are going to lose, almost by definition, right: you have people all over the world who are just already using this. And they're getting great value out of it in their daily lives. They're having a huge amount of fun with it. They're making new art, they're asking all kinds of things, and it's helping them in their jobs and at school and everything else. And they love it. So I think what we're actually talking about, from a social standpoint, is basically a corrupt oligarchic elite that has been in a position of gatekeeping power, in its modern form, for roughly 60 years. And every new technology development that comes along is a threat to that. And, back to the Morrison thing, that's why they hate and fear new technology. They would very much like to control it. It's like social
media. Like they're all just like completely furious about social media, but like, you know,
three billion people use social media every day and they love it. And so it's only the elites that
are constantly kind of raging against it. The problem is, the elites are actually in charge, right? From a formal, you know, government standpoint, they actually have the ability to write laws. By the way, you also see this in polls. There are two very
interesting phenomena unfolding if you do these broad-based polls on trust in institutions. There are organizations, Gallup in particular, and another organization called Edelman, that do these polls every year where they poll regular people, and the question is: which institutions do you trust? The institutions here include everything from the military to religion, to schools, to government, to journalism, to big companies, big tech, and small business.
And basically the two big themes you see in those polls are: one, ordinary people's trust in institutions, trust in any sort of centralized gatekeeping function, has been in basically secular decline since the 1970s, corresponding to the period we've been talking about. And so generally, people as a whole have kind of had it with the gatekeepers, which is very interesting.
And by the way, the beginning of that phenomenon actually predates the internet and social media; it traces back to the early 70s. Which I think is not an accident, because that's when the current regime basically took control.
And then the other thing that's so striking is that, although you can sit and read the news all day long where they just hate on tech companies all day long, if you do the poll of businesses by category, tech polls by far at the top. And so, again, ordinary people are just like, wow, my iPhone's pretty cool, I kind of like it, and this ChatGPT thing seems really nifty. And so I do think there's this weird aspect to this where, like, it's a cliché to say the elites are out of touch. Of course the elites are out of touch; the elites are always out of touch. But it seems like
the elites are particularly out of touch right now, including on this issue. And another way kind of through this knothole is, you know, they may just simply discredit themselves.
Like the EU is a great example. The EU may pass this anti-AI law, and the population of Europe
might just be like, what the hell? Right. And so that would be another white pill against what
otherwise looks like a deep kind of drive in our society for stagnation. It would also be
really strange to try and find a way to define AI in that sense, because it's not like we haven't been using it in a minor form for a while before all this, right? So I don't know how they'd go about defining it in that way.
Yes, so do you ban linear algebra, right? Do you ban linear algebra? And it's actually really funny because, I don't know if you know this, there actually is a push underway to, quote unquote, ban algebra. And it's literally in California: there's a big push underway to drive it out of the schools there. First it started with a push to drive calculus out of the schools in California, and now it's extended to drive algebra out.
And, of course, this is being done under the so-called rubric of equity, right? Because it turns out, you know, test scores for advanced math vary by group. And so there's this weird thing where, like, in California, we're trying to push algebra out of the schools, and in Washington, we're trying to push algebra out of tech. And this is where I start to get really emotional. Because it's like, really? We spent 500 years climbing our way out of primitivism and getting to the point where we have advanced science and math, and we're literally going to try to ban it.
I was involved, if you remember this; there was actually a similar push like this. There was a push in the 1990s to ban cryptography, right, to ban the idea of codes and ciphers. And as you probably know, codes and ciphers are just math. All they are is math, right? There were all these anti-cryptography arguments because, like, bad guys can use it to hide and so forth. And so there was this concerted effort by the U.S. government and other Western governments to ban cryptography in the 90s. And it took us years to fight and defeat that. And I was like, okay, that was so stupid, that will certainly never happen again. And now we're literally back at trying to ban math again.
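Marc's point that codes and ciphers are "just math" can be made concrete with a toy example (my own illustration, not anything discussed in the episode): a repeating-key XOR cipher is nothing but byte-level arithmetic, and the same operation both encrypts and decrypts.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key -- pure arithmetic, no magic."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"attack at dawn"
ciphertext = xor_cipher(message, b"secret")          # "encryption"
assert xor_cipher(ciphertext, b"secret") == message  # the same math "decrypts"
```

A real cipher like AES is vastly more elaborate, but it is still, in the end, a deterministic mathematical function, which is what made the 1990s proposals to ban cryptography amount to banning a branch of math.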
Well, that does lead me just to the final question here, which is to do with the future, whether you're optimistic or pessimistic, and I guess it draws on what we've just been talking about there. How do you envision the short-term future, which I've put down here as 10 to 50 years, and then what do you foresee for the year 3000 AD?
Oh, boy. So I should start by saying
I'm not a utopian. So, you know, we talked a little bit earlier about kind of these impulses that
kind of drive people to these kind of extreme points of view. Like the way I think about it is like
there's a natural drive. A lot of people have what Thomas Sowell called the unconstrained vision, so they've got these kind of very broad-based visions. And, you know, those visions kind of
then split into like a utopian vision.
And that might be, you know, for AI,
that might be something like the singularity, right?
Or in the 1990s, these were called the extropians, right?
Which is sort of this idea of kind of a material utopia
as a consequence of like AI and, let's say, nanotechnology,
on the one hand.
And that's where, by the way, the idea of the singularity came from, right?
Which is Ray Kurzweil and Vernor Vinge.
They were like, at some point you get this kind of, you know,
point of no return, which is like a utopian point of no return.
But then, of course, the flip side of every utopia is, you know, apocalypse. And so that's where a lot of the singularitarians of 20 years ago have become the AI doomers of today. They have the same utopian impulse; they've just flipped a bit and made it negative.
So I should say, I'm not one of those. I'm probably more of a materialist and a little bit more of an engineer, like I said, where things have constraints in the real world. So I don't think we tend to get quite the extreme outcomes, but I do think we get change. We get change at the margin, and change at the margin that compounds over time can become quite striking. So look, sitting here today, over 10 to 50 years, if we want it, you can imagine the next 50 years to be characterized by the rise of AI. It looks like we've kind of figured that out now.
You know, this superconductor thing, if it's real, that's a turning-point moment. And by the way, even if it's not real, maybe this result points us in the direction of something that becomes real in the next few years. And so you can imagine some combination of AI, superconductors, biotech, all these new techniques for bio-optimization and gene editing, and then nuclear, if we get our act together on nuclear fission. And by the way, there are a lot of really smart people working on nuclear fusion right now; fusion would be an even bigger opportunity for unlimited clean energy. Now, the cynic in me would say, if fission is illegal, then they're certainly going to make fusion illegal. But that's a choice; we all get to decide whether we want to live in a world where fusion is illegal. So, say we get nuclear fusion. And sitting here 50 years from now, we're basically like, wow, we are all much smarter than we were, because we have these smart machines working with us. We have solved whatever environmental problems we thought we had. We have abundant energy in an increasingly clean environment. We're curing diseases at a rapid pace, and new babies are born immune to disease. And so, not quite a material utopia, but a significant, meaningful step-function upgrade in human quality of life. I think that's all very possible over a 50-year period, for sure. For the year
3000, over a thousand-year period, I mean, look, you do get into these questions. If you're going to talk about a thousand years, you do get into questions of, for example, the merger of man and machine, right? Over that time frame, you have to start thinking about things like Neuralink, like where Neuralink takes you. And over that period of time, you'll definitely have, you know, neural augmentation. So, you know,
do you have shifting definitions of humanity? You know, where is the transhumanist movement actually
taking us? It becomes a very interesting question over that time frame. Obviously, you also have lots of questions over that time frame about space exploration, getting to other planets, whether or not there's other life in the universe, the spread of our civilization more broadly. There you truly get into science fiction scenarios. It's always fun to talk about, but I will admit I am much more focused on the next 50 years.
Yeah. I mean, is there anything
you'd like to add into the conversation
that you feel, you know, is key
that we haven't touched upon?
Yeah, no, I think that was good; it covered a lot of ground. So, for effective accelerationism, if you Google it, there are already a number of good websites and Substacks talking about it, and a lot of the conversation is happening on Twitter. And I already dropped the names of the e/acc guys, Beff Jezos and Bayeslord; definitely follow those guys.
I've not met Nick Land, but I would definitely give a shout-out and say, for anybody who hasn't encountered his work, they should definitely read up on it. He is, I think, pretty clearly the philosopher of our time. And not even because I agree or disagree with him on everything he's said, and of course he's changed his views on a lot of things over time, but just the framework he operates in, his willingness to actually go deep and actually think through the consequences of the kinds of technologies that I deal with every day, is, I think, way beyond most other people in his field. And I know he took kind of a long road to get here, so it's fun to see. It's fascinating to read.
Oh, I'll point to one other thing. So I already mentioned the Jeremy England book, and I'll point to one other book that people might find interesting. A lot of Land's work, and a lot of accelerationism, is based on the ideas of this field called cybernetics, which is interesting because it's kind of this lost field of engineering.
It was super hot as an engineering field from the 1940s to the 1960s, and it was basically sort of the original computer science. It was also sort of the original artificial intelligence; a lot of the AI people in that era called themselves cyberneticists. But it really is an engineering field that kind of went away, or got a lot more sedate, after the 60s. But as I mentioned, a lot of the ideas around AI, in a world of machines and thermodynamics, were being explored as far back as the 30s and 40s. So the cybernetics people of that era thought a lot about these questions. Anyway, there's a lot of original source material on this. The key character of that movement was Norbert Wiener, and there's a bunch of books by him and about him. But there's also a great book that came out recently, called Rise of the Machines, by an author named Thomas Rid. It sort of reconstructs the archaeology of cybernetics and makes clear how relevant those ideas are today. And if you read it in conjunction with Nick Land's work, I think you'll find it pretty interesting.
I'll be sure to put the link for your Twitter and your blog in the description below as well.
But yeah, I think that's a good place to finish up. Marc Andreessen, thanks very much.
Good. James, a pleasure. Thank you.
Thanks for listening to the a16z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z. We've got more great conversations coming your way.
See you next time.
This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product.
This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with A16Z.
Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC, A16Z, or any of its affiliates.
Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee
its accuracy.