a16z Podcast - a16z Podcast: Brains, Bodies, Minds ... and Techno-Religions
Episode Date: February 23, 2017Evolution and technology have allowed our human species to manipulate the physical environment around us -- reshaping fields into cities, redirecting rivers to irrigate farms, domesticating wild animals into captive food sources, conquering disease. But now, we're turning that "innovative gaze" inwards: which means the main products of the 21st century will be bodies, brains, and minds. Or so argues Yuval Harari, author of the bestselling book Sapiens: A Brief History of Humankind and of the new book Homo Deus: A Brief History of Tomorrow, in this episode of the a16z Podcast. What happens when our body parts no longer have to be physically co-located? When Big Brother -- whether government or corporation -- not only knows everything about us, but can make better decisions for us than we could for ourselves? That's ridiculous, you say. Sure... until you stop to think about how such decisions already, actually happen. Or realize that an AI-based doctor and teacher will have way more information than their human counterparts because of what can be captured, through biometric sensors, from inside (not just observed outside) us. So what happens then when illusions collide with reality? As it is, religion itself is "a virtual reality game that provides people with meaning by imposing imaginary rules on an objective reality". Is Data-ism the new religion? From education, automation, war, energy, and jobs to universal basic income, inequality, human longevity, and climate change, Harari (with a16z's Sonal Chokshi and Kyle Russell) reflects on what's possible, probable, pressing -- and is mere decades, not centuries, away -- when man becomes god... or merges with machines.
Transcript
Hi, everyone. Welcome to the a16z podcast. I'm Sonal. And we're very honored today to have as our special guest Yuval Harari, who teaches at the Department of History at the Hebrew University of Jerusalem and specializes in macrohistory and the relationship between history and biology. He's the author of Sapiens, which is a mind-bogglingly good book, and now has a new book just out, Homo Deus. Did I pronounce that properly?
I use the Latin pronunciation with Homo Deus.
Deus, okay. But you can say Homo Deus; I say it really badly,
like a non-accented "dais."
That was great.
That, by the way, was Kyle's voice, who's also joining us on this podcast.
He's on the deal and investing team and covers a lot of technology like drones,
AI, and a bunch of other stuff.
So just to get things started, we talk a lot about innovation and technology.
And I've always wondered what's the simplest definition of technology and innovation.
And reading your book, Sapiens in particular, and then Homo Deus, the thing that really struck me is that technology is the greatest accelerator that humankind, in fact the entire evolution of all species on Earth, has ever seen, because it allowed us to essentially bypass evolutionary adaptations: we could become seafarers without having to grow gills like a fish, for example. And so that is an incredibly powerful idea, but it's non-directional.
And given that your work essentially, the first phase, was talking about the organic history of our species, and your new book is shifting to a more inorganic version,
I'd like to hear what drove that shift. Well, I think that so far for thousands of years,
humans have been focusing on changing the world outside us and now we are shifting our focus
to changing the world inside us. We have learned how to control forests and rivers and other
animals and whatever, but we had very little control over what's happening inside us,
over the body, over the brain, over the mind. We could stop the course of a river,
but we could not stop the body from getting old. If a mosquito,
annoyed us, we could kill the mosquito.
But if a thought annoys us, we don't know what to do about it.
Now we are turning our innovative gaze inwards.
I think the main products of the 21st century will be bodies and brains and minds.
We are learning how to produce them.
And as part of that, we may also, for the first time, not only in history, for the first
time in evolution, the evolution of life, we may learn how to produce non-organic life forms. So after four billion years of evolution of organic life forms, we are really on the
verge of creating the first inorganic life forms. And if this happens, it's the greatest revolution
in the history of life since the very beginning of life. You know, what do you mean by inorganic
life forms? Because in your book, you draw a distinction between biological, cyborg, and non-organic. Would it be like living in a network? Is that our identity then? Is that who we are? Like, what do you see?
It could be something that exists only in cyberspace.
I mean, you hear a lot of talk about uploading consciousness into computers
or creating consciousness in computers.
It could be life forms in the outside world,
but which are not based on organic compounds.
It can go any of these ways, but the essential thing is
it's no longer limited by organic biochemistry.
Evolutionist psychologists, biologists talk a lot about our hands
and the formation of our hands as tools.
One thing that's happened to me, anecdotally, is as I use my mobile phone more and more,
my hand muscles have literally atrophied to some extent.
Like, I know this because I started taking notes by hand again, instead of on my phone, to be polite in meetings.
And my handwriting, I used to win awards for handwriting,
and now it's like chicken scratch.
But it's much more extreme because for four billion years, all parts of an organism had to be literally in the same place for the organism to function.
Oh, right, like physically co-located, a single entity.
I mean, if you have an elephant, the legs of the elephant must be connected to the body of the elephant.
If you detach the legs from the elephant, it dies, or it can't walk.
Now, with inorganic life, there is absolutely no reason why all parts of the life form must be at the same place at the same time.
That blows my mind.
It can have, you know, it can be dispersed over space.
This is something that for four billion years was unthinkable.
And it's just around the corner.
We're essentially already uploading ourselves into the cloud, online social networks, the worldwide web.
That's actually replacing writing as a major artifact.
That's our new collective history.
One of the consequences of that is it changes the dynamics of what becomes real and not real.
And it reminds me of this famous story by Ray Bradbury called The Veldt,
which basically is a story where there's a virtual world that these two kids sort of enter,
and they end up killing their parents.
And you ask a similar question in the book.
You give the anecdote of Jorge Luis Borges' short story "A Problem"
and the story of Don Quixote.
It sort of is this blending of delusion and reality.
The question is what happens when our illusions collide with reality?
And with humans and human history,
you see more and more that our fictions and illusions
are becoming more and more powerful.
Like, take fake news,
this is a big debate
that's playing out right now
in the United States.
You know, with fake news
and with all this idea
of the age of post-truth,
I would like to know
when was the age of truth.
That's my question.
I totally agree with you.
Was it the 1980s?
Was it the 1930s?
It never existed, right?
I mean, as far back in history
as you go,
what kept humans together in society
is belief in shared illusions
and shared fictions.
Religions, or imagined borders.
Like when you swear the U.S. president into office, he swears on a copy of the Bible.
And even when people testify in court, I swear to tell the truth, the whole truth and nothing but the truth, they swear on the Bible.
Which is so full of fictions and myth and error.
It's like you can swear on Harry Potter just the same.
Some people do.
Some people do.
That's true.
For thousands of years, human societies have been built on shared fictions
and shared illusions, and there is nothing new about that.
It's just that with technology, actually our fictions and illusions become more powerful than
ever before.
And more visible, I think, to one another.
One of the illusions that you talk about being broken down by the advancements in science
and technology is the illusion that we're all individuals.
Free markets and capitalism is the idea that there's a bunch of products that appeal to you
as an individual, and they try to put those individuals into buckets and market towards
them.
And actually, it turns out that scientific breakthroughs show that there isn't just this kind of one individual you that accumulates through all of your experiences.
Your brain is just kind of spitting out a lot of things.
Maybe it's deterministic.
Maybe it's random.
Maybe it's probabilistic.
But you don't necessarily have control over that.
And so if you don't have control over the desires that your brain is spitting out the random thoughts, how much of any of that is actually you?
And so what are the implications of that?
I think what we are seeing is the potential breakup of the self, of the individual.
The very word individual means literally something that cannot be divided.
Indivisible.
And it goes back to the idea that, yes, I have all kinds of external influences and my neighbors and my parents and so forth.
But deep down, there is a single indivisible self, which is my authentic identity.
And the way to make decisions in life is just forget about all these external disturbances
and try to listen to yourself, try to connect to yourself.
And the idea is we just need to do whatever this inner voice tells us to do.
But science now tells us that when you look inside, you don't find any single authentic self.
You find a cacophony of different conflicting voices, none of which is your true self. There is just no such thing. And if in the 20th century the big fear for individualism was that the individual would be crushed from outside, now the threat comes from the opposite direction: the individual will break up from inside. And then the entire
structure of individualism and democracy and free market, it all collapses with the individual.
It all collapses with the self?
Or just one alternative possibility,
because this is actually what struck me most
when reading Sapiens and then reading Homo Deus afterward,
is that the big theme of Sapiens
was this great unification of humankind
and being able to collect people into empires, nation states,
outside of these sort of hunter-gatherer tribes.
And now, when I look at what's happening
because of this mass coordination online,
you're now seeing this return to tribalism
in some ways, I would argue.
Well, that's the value of shared illusions, whether it's religion or the idea
that we've got this free market system but some safety net to keep it all functioning
and to keep anyone from being exploited.
The point of having that shared ideology or that shared illusion is you get to pretend
that we all care about the same thing that we're all coordinated.
Right.
Now, though, because of the internet, you can actually identify what the same thing is
at a very micro-targeted niche level in a way that was unprecedented.
and it's no longer where you were born, to your point, or where you're physically located.
It could be now your political beliefs.
It could be your belief about, you know, if you're a fan of Harry Potter, are you
Slytherin or Gryffindor?
Like, it could be any of those things.
And people collect in new tribes.
And I find that's fascinating because you do see sort of this return to the past, not in
a pastoral way, but you're seeing this coming full circle.
Like, you know, the Industrial Revolution created adolescence.
Are we going to go back to a world where you don't need adolescence again?
You needed banking credit.
Are we going to go back to a world where because of online algorithms and new information
sources, you don't need that version of a credit score. You can go back to this trusted
personal manager who essentially knows what he needs to know in order to invest in you as
a risk. So I always wonder in this context of this is another thing to think about, not
just as an individual level, but sort of a return to tribalism, especially lately.
The present surge of neo-nationalism or tribalism, I think, is just a phase.
It's a backlash against globalization. And the main problem is it doesn't have any solutions
to the deep problems of the 21st century.
All the major problems of humankind
in the 21st century are global in nature.
It's climate change and global warming,
it's global inequality,
and above all, it's technological disruption.
I mean, the implication of the rise of AI
and bioengineering and so forth,
you cannot solve any of these problems
on the national level.
The nation is simply the wrong framework for that.
And therefore, I don't think that nationalism
really has relevant answers to the problems we now face.
So I don't think that nationalism is our future.
I think looking further to the future,
what we will see with regard to the individual
is that at a certain point, external entities,
whether it's corporations or whether it's governments,
they will have enough data, especially biometric data,
and enough computing power to be able to understand me better than I understand myself.
Very soon, Facebook or the Chinese government will be able to do that.
And once you have an external entity, an algorithm, that knows me better than I know myself,
this is the real turning point.
This is the point when individualism, as we've known it, doesn't make any sense,
when democracy in the free market become completely obsolete
and we need fundamentally different models
for running the world and for understanding what's happening.
Right. For several hundred years now,
the market has been the most efficient mechanism
for saying what our opinions or our desires really are.
We could best allocate production towards things that people find valuable
because they're voting with their dollars.
But if you can accurately say, based on this person's heart rate, what they're paying attention to, how they react to particular inputs, whether it's an advertisement or some new way of interacting with things based on new technologies like VR, you could know the closest thing to the underlying motivation and desire, even better than the person themselves maybe would.
But on the other side of it, there's an example you give, and this goes back to the topic of free will and individualism: lab rats that have electrodes hooked up to the reward centers of their brains, where you have them navigate a maze or climb ladders and go down little chutes by basically stimulating the reward center, and it basically influences the rat's desire. It doesn't feel like it's being coerced into doing that activity.
It's like, oh, wow, I'm really into the idea of climbing this ladder right now. This is awesome.
The rat race.
And so what's interesting is, markets, as efficient as they are, part of how they worked was this idea of marketing to instill desires: car ads giving you this vision of being on the open road, free, wind blowing in your hair, and then at some point the desire pops up at a time when you can act on it, and you buy a car.
Whereas the future state that you describe is imagine you had a headset that was like a miniaturized fMRI
that can detect exactly where the parts of your brain would need to be stimulated to make you really want to play the piano right now
so that you'll be motivated intrinsically to learn it.
You could basically, like, sell the idea of being into this.
And so there's being able to read your desires, but also being able to shape your desires. What do you think the interaction of those two looks like?
We don't know. I mean, the basic tendency is to think in 20th century terms that they'll try
to manipulate us. Right. And this is certainly a danger, but intellectually, it's the less
interesting option. That, okay, they'll use it to advertise in a different way, to shape our
desires without even our knowing it, which they've been trying to do for decades. They'll have
better tools of shaping our desires.
The deeper and more interesting question is,
what if Big Brother can really do a better job than the individual
in understanding what you want and what you need?
Because many people discover during their life
that they don't really know what they want,
and they often make terrible decisions in the most important decisions of their lives,
what to study, where to work, whom to date, whom to marry.
What happens if you have an external entity that makes these decisions for you better than you can?
It starts with very simple things like choosing which book to buy.
If the Amazon algorithm really picks books that you are very happy with,
then you'll gradually shift the authority to choose the books to Amazon.
And this may happen with more and more fields in your life.
And the really interesting question is not if they try to manipulate.
you. The really interesting question is, what if it works? Oh, that's such an interesting question. What does it mean to be a human being when all the
decisions in your life are taken by somebody else who really knows who you are? It's like being
a baby forever. It's already working on some level, because you might have a million other movies out there, but you really don't care, because you only care about what's in the Netflix catalog, because you're looking for the convenience of being able to binge watch and get it on demand, in the moment. So it's already reshaping that cultural landscape. I mean, it's already happening to some extent. I think the big breakthrough will come with biometric data. So for most of these algorithms,
whether it's Amazon or Netflix or whatever, they work mainly on the basis of data external to my
body. They follow me around in space, see where I go, which places I visit, they see my likes and
dislikes on Facebook, what do I buy, and so forth. But the real breakthrough will come when they start
receiving more and more data from biometric sensors on or inside my body. Right, like quantified
self, wearables. Yeah, I read a book and Amazon knows exactly what is the impact of every
sentence I'm reading on my heartbeat, on my blood pressure, on my brain activity. This is really where
you can see how an external system
can make better decisions for you.
Today, the systems are basically reflecting ourselves back at us.
You look at products and because of cookies
when you go elsewhere on the web, it's like, oh, I see that thing again.
Like, it's just being reflected back at me.
Same thing with your Netflix queue.
I gave certain star ratings to certain things.
It's reflecting that same pattern back at me.
Exactly.
With recommendations.
Something that's interesting to me is the idea of mapping concepts
in a feature space using deep learning
and then basically projecting it in different forms.
And so the idea of tracking what your eyes are looking at, what's keeping your attention, what makes your heart rate go up, what makes your eyes dilate while you're reading a book: you can imagine the book, as you're reading it, being reformatted and communicated in different ways, because they know this different way will reach you better and you'll be more receptive to it.
And so it might not necessarily be what feels coercive to us, a system of plugging an electrode into your brain and saying, now you're going to care about reading history.
It's going to say, here is the optimal way to present history to this specific individual.
This is especially being explored in new educational methods: an AI teacher that studies you while it is teaching you, adapting to your particular strengths and to your particular weaknesses.
Also breaking down all the traditional limitations and barriers of modern education, I mean, modern education takes place in school.
And you have this division: there is school and there is real life. Now, consider you have a single AI mentor that follows you around everywhere, your whole life, 24 hours a day, connected to biometric sensors on your body, and there is no longer any division between school and life. There is no history teacher and mathematics teacher. You have the same teacher for both. And you don't have to be part of a
group like there are 30 other kids in the class. Basically, an AI assistant where it's
constantly in Socratic debate with you.
Kids are inclined already to say like, okay, but why?
Okay, but how?
Okay, but why?
And they keep digging kind of deeper until you as a parent or a teacher are just like,
because it is, okay?
Whereas an AI system, assuming it's mapped out like the entire canon of human philosophy
and knowledge, can basically just keep going.
Even if it doesn't go all the way to that extent,
you could have a huge increase in productivity of, you know, education just by like providing
those kinds of tools to kids.
Mass personalization.
I mean, I come from the world of developmental psychology and education.
And the Holy Grail has always been this idea
of mass personalization to be able to customize.
But I want to make two points.
One, I agree with this idea.
Vygotsky had this idea.
It's a constructivist way of learning.
You're constructing, you're learning your world.
And that's how you learn these concepts
in a very fundamental way.
And it's really ironic because educators
have been trying to fake that in the school setting
for years with Montessori methods,
Reggio Emilia, and all these others.
Because of this false artificial divide
between real life and school.
The flip side, however, and I don't think we can ignore this,
is that there is a social element to why school matters,
a socialization component that has arguably nothing to do with education
and where there is shared learning and collaboration
and the interaction of students.
And so I wonder what this means for that.
Well, you can have it outside school as well.
You're saying there's no distinction between school anymore.
Exactly.
And it doesn't have to be limited.
Okay, all your friends are the same age as you.
There is no reason why, in the group with which you socialize in school, everybody has to be the same age.
Well, that actually is another way that technology brings you back to the past.
Because if you think of Little House on the Prairie, the schoolhouse essentially had all the grades in a single room because of physical location.
But you're arguing that those boundaries break down, the idea being that a large set of inputs cranks out some modified set of outputs that fulfill some need.
Well, the question that I have for you guys,
and especially given Sapiens and the themes of Homo Deus,
is what do humans have to believe in order to make this reality continue happening?
Do they not have any agency in any of this?
Because it sounds like we're almost talking about like, you know,
these uploaded brains in a vat.
And is there any sense of coordination consciously?
Is there a new religion?
I used to watch Star Wars as a kid.
I remember thinking of myself because I grew up Hindu.
and you learn a lot about all these Hindu, you know, gods and goddesses.
I remember thinking, this reminds me a lot of, like, hearing about the Mahabharata and all these other things happening.
Anyway, I would argue that science fiction is like religion for a lot of people.
But what do people have to believe in this new world?
Like, what is their religion?
Is there one?
I mean, you make the argument about, like, Data-ism being the new religion, but that sounds to me more like something that's there versus something that people are choosing, like creating new myths and gods around actively.
I think we are seeing.
and we will see more and more the rise of kind of techno-religions,
religions based on technology,
that make all the old promises of traditional religions,
they promise justice and happiness and even immortality and paradise.
But here on earth, with the help of technology,
there already has been one very important techno-religion in history,
which is socialism.
Oh, I never thought of that that way.
Which came in the 19th century with the Industrial Revolution.
And what Marx and Lenin basically said was: we will create paradise on earth with the help of technology, steam engines and electricity and so forth. When Lenin was once asked to define communism in a single sentence, the answer he gave was: communism is power to the workers' councils plus the electrification of the whole country.
You cannot establish a communist regime without industrialization.
It's based on the technology of the Industrial Revolution,
electricity and steam engines and so forth.
And the idea is we'll use this technology to create paradise on Earth.
It didn't really work very well.
So now I think we will see the second wave of techno religions.
Now we have genetics and now we have big data and above all we have algorithms.
The salvation, the paradise, will come from the algorithms.
You talk about in the book the idea that the more you commit or sacrifice on behalf of your ideology or religion, the more you buy into it because you have this sunk cost.
And so the idea of sacrificing a goat or a cow to a god made you buy into it more, because I can't have, like, spent the last eight seasons sacrificing goats and have it been for nothing.
And so looking forward, then, we're hitting some kind of productivity cap as normal humans, such
that autonomous machines and systems are going to beat us.
So we have to sacrifice our own humanity to increase our own productivity and augment ourselves.
You can almost see the emergence of some kind of powerful ideology.
The religion of the 21st century onward is we are the gods.
This is actually an old idea.
Humanism, which goes back to the 18th century, even 17th century,
is saying humans are the gods.
Humans are the source of all meaning and authority.
Everything you expected previously from the gods
to give legitimacy to political systems, to make decisions in ethics,
humanism comes and says the highest source of authority in politics is the voter.
The highest source of authority in economics is the customer.
The highest source of authority in ethics is your own feelings.
Humans are the gods.
Now we are entering a post-humanist era.
Authority is shifting away from humans.
If in the last 300 years
we saw authority descending from the clouds
to earth, to humans,
now authority is shifting back to the clouds,
but not to God, but to the Google Cloud,
to the Microsoft Cloud.
The basic idea of this,
if you want, new religion or new ideology,
is again that, given enough data
and enough computing power,
an algorithm can understand me better
than I understand myself
and make decisions for me.
In the end, religion is about authority.
The basic question of religion is, where does authority come from?
And the answer of the 21st century: authority doesn't come from humans.
Authority comes from data and from data processing.
There is also an underlying new ontology.
What is the world?
What is reality?
In the end, reality is just a flow of data.
Physics, biology, economics.
It's all just a flow of data.
We're just computers interpreting some fraction of reality.
It's all algorithms as the connective tissue of everything, from biology to computers to everything.
I have a quick question for you here.
What does this mean for the future of work?
I would love to hear your thoughts on the universal basic income debate that's playing out around the world right now because that's essentially people opting out of the rat race in some arguments.
I think we need new economic models in place for the moment when AI
and robots and so forth may push more and more humans out of the job market.
And we might see the creation of a new class of people who are not just unemployed, but unemployable.
And at present, the best idea so far that people managed to come up with is universal basic income.
The problem there is that we don't really know what universal means and we don't really know what basic means.
Right. And where the income comes from, but that's another sidebar.
Well, let's say you tax and use the proceeds to give people universal basic income.
Now, then the question is, what is universal?
Will we see the day when the U.S. government taxes the profits of Google in the U.S.
and uses it to pay people in Bangladesh or Mexico who lost their jobs?
So this is the first issue of universal because now the economy is global.
And a disruption of the economy, say by the rise of AI, will really
require a global solution.
And most people who think about universal basic income,
they think in national terms, universal for them means U.S. citizens.
The other problem is what is basic.
Basic human needs keep changing all the time.
We are beyond the point when basic needs meant food and shelter.
And the problem is that humans are biased toward looking at examples based on who you know.
It's hard to see that kind of level of UBI being pulled off;
it feels like people's expectations
would be much higher
depending on where they are
and what life they've already lived.
The basic problem
is that people's expectations
keep changing.
Usually they grow.
As conditions improve,
expectations increase.
And therefore,
what you see
is that even though
the conditions
over the last centuries
of most humans
have improved quite dramatically,
people are not
becoming more satisfied because their expectations also increase. And this is going to continue
in the 21st century. Yeah, I have a question here because, you know, in Sapiens, you said
something that I thought was very profound when I read it, which is that the agricultural
revolution was actually one of the greatest frauds ever perpetrated on ourselves. And so if you
think about this shift from agricultural revolution to industrial revolution to now essentially
information revolution, what's the fraud that we're perpetrating on ourselves now? Where does
meaning come from? Because I think the thing that people often forget to address when they
talk about the universal basic income and, you know, future of work debate is, is this idea of
meaning. And does that even matter? Restless people tend to pick up the pitchforks.
Right. Exactly. Exactly. Because it also goes to your points. And this is a universal theme that we have
to address on some level: further entrenching inequalities. That's an important thing to think about.
There are two different problems. I mean, first you have inequality. And once more and more people no
longer work, they depend on, say, universal basic income, and then they have no way of closing the
gaps. They depend on charity, on whatever the government is able or willing to give them,
and you just don't see any way in which they can close the gap.
That's if they're dependent on it, because it can also be something that's supplementary
to something else you do.
I'm thinking in terms of what happens if, again, AI pushes more and more humans out of the job
market, so they rely on universal basic income.
and it provides whatever it provides,
but if they want more,
they just have no way of getting more.
So this kind of entrenches inequality.
And if you add to that biotechnology and bioengineering,
you get for the first time in history
the potential of translating economic inequality
into biological inequality.
Yes.
If you look back at history,
let's say the Hindu caste system,
people imagined that the Brahmins are superior, they are smarter, they are more creative,
they are more ethical, but at least as far as scientists today are concerned, this wasn't
true. It was all imagination. Right, that was not true at all. It was not true. It wasn't true
that the son of the Brahmin or the son of the king was biologically more capable,
smarter, more creative, whatever, than the son or daughter of a simple peasant. However,
in the 21st century, it might be possible for the first time.
to translate economic inequality into real biological inequality.
And once this starts, it becomes almost impossible to close the gap.
So this is one problem of a rise in inequality.
Another problem is the question of meaning that even if you can provide people with food
and shelter and medicine and so forth, how will they find meaning in life?
for many people, the work, the jobs provide them with meaning,
I'm doing something important in life.
A mission, I believe in this.
Yeah.
So one of the answers, some experts say,
is that people will just play games most of the day.
They'll spend more and more time in virtual realities
that will provide them with more meaning
and more excitement and emotional engagement
than anything in the real.
Real reality.
Everyone just lives in there, perfectly optimized for them, holodeck.
Exactly.
Because you're freed from the constraints of the physical realities.
Yeah, and you get your meaning from the game, from the virtual reality game.
And in a way, you can say, oh, this is nothing new.
It's been happening for thousands of years.
It's simply being called religion.
I mean, religion is a virtual reality game that provides people with meaning by imposing imaginary rules on an objective
reality. You play this game where you have to collect points. If I eat non-kosher food,
I lose points. And if by the time I die, I gather enough points, then I go up to the next
level. I mean, in Hinduism, karma is essentially this great game of collecting and
subtracting points across multiple lifetimes. Exactly. So really quickly, this goes back to the
automation kind of question and, you know, potential future. If you look back at the
Industrial Revolution, humans as mechanical actors, imbuing something with value by
acting on it with their hands or bodies, as with agriculture, became less important as,
using animals and then machines, we were able to do that same task much more efficiently.
Now humans are valuable because they're knowledgeable operators of that machine.
As part of the Industrial Revolution, the shift to services led to this idea that we're not
just investing in capital, we're investing in human capital.
We're making people smarter so that they're better at their jobs.
And now with AI systems, suddenly again, you can just kind of buy knowledge capital as this thing that can be dropped in.
Okay, an argument here for how humans remain valuable is, well, we're still social animals.
We are still better than any machine at interpreting how other people are thinking and, you know, assuaging fears, or whatever it is where the power of empathy is what humans will bring to the table.
An interesting point you make is actually how humans accomplish that task, a doctor giving bad news about a cancer diagnosis.
They are looking at the physical way that a person is moving their facial muscles, how their tone changes, how their voice cracks as they feel a certain emotion.
And if you look, that's actually just pattern recognition, which is exactly what deep learning is good at.
And so is that even an advantage humans are going to have, or are computers going to be much better at looking not just at those exact same features that humans can, but also, like, zooming in on the eyes and looking at dilated pupils and guessing at heart rate by looking at someone's wrist or chest? What are humans going to be
good at. What should people be investing in for, you know, the future to come? Yeah, what happens
when human capital becomes commodified? We don't really have an answer. Yes, many people, when
they reach that point, they say, okay, we'll invest in social skills, in empathy, in recognizing
emotions, the emotions are like the last, the last frontier. But the thing is that emotions are
not some spiritual phenomenon that God gave. No, they're electrochemical, just like everything else.
Emotions are a biochemical phenomenon.
They are biological patterns, just like cancer.
When your doctor wants to know how you feel, he or she basically recognizes patterns in two kinds of data, as you mentioned.
It's what you say and actually the tone of your voice, even more important than the content of what you're saying.
And secondly, your body language and your facial expression.
When my doctor looks at me at the clinic, she doesn't know
what the level of my blood pressure is at the moment.
She doesn't know which parts of my brain are activated right now,
but an AI potentially will be able to know that in real time using biometric sensors.
It will have much better sources of data coming from within your body.
So its ability to diagnose emotions will be better than the ability of most, if not all, humans.
So what will humans do?
We don't know.
Nobody really has a good idea of what the job market will look like in 30 or 40 years.
We'll have some new jobs, maybe not enough to compensate for all the losses, but there will be new jobs.
Problem is, we don't know what they are.
Because the pace of change is accelerating, it's very likely that you will have to reinvent yourself again and again and again during your lifetime if you want to stay in the game.
Right. When you don't have premature death anymore,
and you live your full life or you even have extended longevity through technology,
you can reinvent yourself like 10 times until you're 100.
The basic idea for thousands of years was that human life is divided into two periods.
In the first period of life, you mostly learn, you learn skills, you gain knowledge.
And then in the second part of your life, you mostly work and you make use of what you learned earlier.
This is now going to break down.
By the time you're 50, what you learned as a teenager is mostly irrelevant.
It's already true right now.
So now, you know, again, thinking about autonomy, you know, we're already seeing the shift
towards smaller militaries with really advanced equipment and fighter jets and we're going
to see robots on the battlefield.
As humans become less valuable economic actors, as they become less necessary to fight for power
at kind of that scale, how does that factor into, you know, the extension,
or lack thereof, of, you know, political agency?
Most people today have absolutely no military value.
In the 20th century, the most advanced armies relied on recruiting millions of common soldiers to fight in the trenches.
Now they rely increasingly on small numbers of highly professional soldiers, super warriors, all the special forces and so forth.
Surgically targeted.
And they rely increasingly on sophisticated and autonomous technology,
like drones and robots and cyber warfare.
So you just don't need people militarily as before,
which means not only that they are in danger of losing their political power,
but also that the government will have a far smaller incentive to invest in their health and education and welfare.
Maybe the biggest project and achievement of most states in the 20th century was to build these massive systems
of education and health and welfare.
And you see this not only in democracies,
but also in totalitarian regimes.
But if you don't need them as soldiers or workers,
then the incentive to build hospitals and schools and so forth diminishes.
In a country like, I don't know, Sweden,
I think the traditions of the welfare state
and the social democracy will be strong enough
that the Swedish state will continue to invest in the education
and health of most of the people there,
even if there is no military or economic necessity.
But if you think about large developing countries,
it's much, much more complicated.
If the government doesn't need tens of millions of Nigerians
to serve as soldiers and workers,
maybe it will not have enough incentive
to invest in their health and education.
And this is very, very bad news
for most of the human race, which lives in places like Nigeria and not in places like Sweden.
And so what's the best course of action to follow?
If that's the case, is it make sure that the most inclusive institutions possible are in place before that transition happens?
We don't have enough time, I think. We are not talking in terms of centuries.
We are talking in terms of decades.
And the transition is going to take place especially in the civilian economy; in the military, it has already happened.
We are there.
In the civilian economy, maybe we have 20, 30, 40 years, nobody really knows.
It's a very short time.
If we don't have a workable model by the time the transition is in high gear, then it's going to be both an extremely difficult situation for the majority of people in the world, and the social and political implications are going to
destabilize the whole world, including the first world.
You talked in your book, your new book, a lot about how there are three types of resources.
People focus on raw materials and energy, but have ignored a third type, which is knowledge.
And my question from just an economic perspective is how does this tie into how we think
about growth, especially given what you just talked about, this need to enlarge the pie
in order to avoid war and violence.
It's often thought that there is a limit to the potential growth of the economy, because
there is a limit to the amount of
energy and raw material we have access
to. But this is, I think, the
wrong approach. We have a third
kind of asset, which is knowledge.
And the more
knowledge you have, the more
energy and raw materials you also
have, because you discover new
sources of energy and new sources
of raw materials. I don't think
that we are going
to bump into a limit
in terms of, oh, there is not enough
oil in the world. There is not enough
coal in the world. This is not the problem. The problem is probably going to come from the direction
of climate change and ecological degradation, which is something very different. People tend
to confuse the two problems, not enough raw materials and the problem of climate change,
but they are very different. I actually wanted to probe about this because in Sapiens,
one of the things that you talked about was how we've had waves of climate change throughout the
entire history of our planet. And, I am no climate change denier by any means, but I can't help but ask: whether we're the cause or it's a cyclical effect, what does it mean for what the next cycle of change will be?
Because the one thing that came through loud and clear was how every wave of climate change has brought about a corresponding change in human evolution.
Well, there certainly have been many periods of climate change before, but it does seem that this time it's different, that this time it is caused to at least a certain degree
by human action and not by some natural forces like plate tectonics or ice ages or things like
that. And the potential impact for human civilization and for most other organisms on the
planet is catastrophic. So, you know, it could be both natural causes and human causes
at the same time. It doesn't make it any better. It just makes it worse. The effects are the effects,
Right. In your book, you have this beautiful quote, which I thought was a really great articulation. Modernity is a deal. All of us sign up to this deal on the day we're born and it regulates our lives until the day we die. Very few of us can ever rescind or transcend this deal. It shapes our food, our jobs, and our dreams. And it decides where we dwell, whom we love, and how we pass away. And I want to know if you have any parting thoughts for people whose lives are being shifted by some of this technological change.
Since the main theme has been technology and the future of technology and its impact on society and politics,
I think that my closing thought is that technology is never deterministic.
You can build very different kinds of political and social systems with the same kind of technology.
You could use, you know, the technology of the Industrial Revolution,
the trains and electricity and radio, you could use them to build a communist
dictatorship or a fascist regime or a liberal democracy.
The trains did not tell you what to do with them.
In the same way, in the 21st century, we'll have artificial intelligence and bioengineering
and so forth, but they don't determine a single outcome.
We can use it to build very different kinds of societies.
We can't just stop technological progress. It
won't happen. It's inevitable. But we still have a lot of influence over the direction it is
taking. So if there are some future scenarios that you don't like, you can still do something
about it. Yeah. Well, thank you so much for joining the A6 and Z podcast. If people have not
already read Sapiens, they must read that, and especially the new book, Homo Deus, a brief history
of tomorrow. Thanks for coming in. We really appreciate your time. Thank you.