The Diary Of A CEO with Steven Bartlett - Yuval Noah Harari: The Urgent Warning They Hope You Ignore, “More War Is Coming”, Yuval’s Chilling Future Predictions!
Episode Date: January 11, 2024

If you enjoy hearing about the potential impact of AI on humanity, I recommend you check out my conversation with ex-Google officer Mo Gawdat, which you can find here: https://www.youtube.com/watch?v=...bk-nQ7HF6k4

He has shown millions of readers how humans have evolved to where we are now, but what does the future hold for us as a species? Yuval Noah Harari is a best-selling author, public intellectual and Professor of History at the Hebrew University of Jerusalem. He is best known for his bestselling books ‘Sapiens: A Brief History of Humankind’, ‘Homo Deus: A Brief History of Tomorrow’ and ‘21 Lessons for the 21st Century’. His books have sold over 45 million copies in 65 languages. In this interview, Steven and Yuval discuss everything from how AI will change everything, the importance of language and stories, why the idea of finding a ‘soulmate’ is a myth, and the ongoing battle for human attention.

You can pre-order the 10th anniversary edition of ‘Sapiens’ here: https://bit.ly/48JVQ6c

Follow Yuval: Twitter: https://bit.ly/3HdUxR7 Instagram: https://bit.ly/41WLbCT YouTube: https://bit.ly/3vyAwm0

Follow me: https://beacons.ai/diaryofaceo
Transcript
Quick one. Just wanted to say a big thank you to three people very quickly. The first people I want to say thank you to are all of you that listen to the show. Never in my wildest dreams is all I can
say. Never in my wildest dreams did I think I'd start a podcast in my kitchen and that it would
expand all over the world as it has done. And we've now opened our first studio in America,
thanks to my very helpful team led by Jack on the production side of things. So thank you to Jack
and the team for building out the new American studio. And thirdly to Amazon Music, who, when they heard that we were expanding to the United
States, and I'd be recording a lot more over in the States, they put a massive billboard
in Times Square for the show. So thank you so much, Amazon Music. Thank you to our team. And
thank you to all of you that listened to this show. Let's continue.
We are now in a new era of wars. And unless we re-establish order fast, we are doomed.
Yuval Noah Harari, one of the brightest minds on planet Earth: historian and best-selling author of some of the most influential non-fiction books in the world today.
I think we are very near the end of our species,
because people often spend so much effort trying to gain something without understanding the consequences.
For example, we will get to a life where you can live indefinitely. But realizing that you have a chance to live forever, and yet that if there is an accident you die, the people who will be in that situation will be at a level of anxiety and terror unlike anything that we know. Then you have artificial intelligence
and the world is not ready for it.
It's the first technology in history
that can make decisions by itself
and take power away from us
to hack human beings, manipulate our behavior
and make all these decisions for us or about us.
Whether to give you a loan, whether to give you a mortgage,
dating us,
shaping your romantic life.
But the real problem is that increasingly the humans at the top could be puppets
when the most consequential decisions
are made by algorithms.
Global financial decisions, wars.
This is extremely dangerous,
but it's not inevitable.
Humans can change it.
But with what's to come,
are you optimistic about the future? I'm very worried about two things. First of all...
Yuval, I have three of your books here. And these are three books that sent a huge tidal wave,
a ripple through society. With these books and with all of the work that you're doing now,
with the lectures you give, the interviews you give, what is your mission? What is the sort of,
if I was to be able to summarize what your collective mission is with your work, what is
that? It's to clarify and to focus the public
conversation, the global conversation, to help people focus on the most important challenges
that are facing humankind. And also to bring at least a little bit of clarity to the collective
and to the individual mind. I mean, one of my main messages in all the books is that our minds are like factories that constantly produce stories and fictions that then come between us and the world.
And we often spend our lives interacting with fictions that we or other people created, completely losing touch with reality.
And my job, and I think the job of historians more generally, is to show us a way out.
Inherent in much of your work is what feels like a warning.
And I've watched hundreds of videos that you've produced
or interviews you've done all around the world,
and it feels like you're trying to warn us about something,
multiple things.
If my estimation there is correct, what is the warning?
Much of what we take to be real is fictions. And the reason that fictions are so
central in human history is because we control the planet, rather than the chimpanzees or
the elephants or any of the other animals, not because of some kind of individual genius that
each of us has, but because we can cooperate
much better than any other animal.
We can cooperate in much larger numbers and also much more flexibly.
And the reason we can do that is because we can create and believe in fictional stories.
Because every large-scale human cooperation, whether religions or nations or corporations, is based on mythologies, on fictions. Again, I'm not just talking about gods. This is the easy example. Money is also a fiction that we created. Corporations are a fiction. They exist only in our minds. Even lawyers will tell you that corporations are legal fictions.
And this is, on the one hand, a source of immense power. But on the other hand, again, the danger is that we completely lose touch with reality and we are manipulated by all these fictions, by all these stories.
Again, stories are not bad.
They are tools.
As long as we use them to cooperate and to help each other, that's wonderful.
Money is not bad.
If we didn't have money, we would not have a trade network.
Everybody would have to produce everything by themselves, maybe with their friends and family, like the chimpanzees do. The fact that we can enjoy food and clothing and medicines and
entertainment created by people on the other side of the world is largely because of money.
But the danger is that we forget that this is a tool we created in order to help ourselves, and instead this tool kind of enslaves us and runs our lives.
And, you know, I'm now just back home in Israel.
There is a terrible war being waged.
And most wars in history, and also now, they are about stories.
They're about fictions. People think that
humans fight over the same things that wolves or chimpanzees fight about, that we fight about
territory, that we fight about food. It sometimes happens, but most wars in history were not really
about territory or food. There is enough land, for instance, between the Jordan River and the Mediterranean
to build houses and schools and hospitals for everybody.
And there is certainly enough food.
There's no shortage of food.
But people have different mythologies,
different stories in their minds,
and they can't find a common story they can agree about.
And this is at the root of most human conflicts.
And being able to tell the difference between what is a fiction in our own mind and what is
the reality, this is a crucial skill. And we are not getting better at finding this difference as time goes on.
And also with new technologies, which I write about a lot, like artificial intelligence,
the fantasy that AI will answer our questions, will find the truth for us,
will tell us the difference between fiction and reality. This is just another fiction. I mean,
AI can do many things better than humans, but for reasons that we can discuss, I don't think that
it will necessarily be better than humans at finding the truth or uncovering reality.
It strikes me that the thing that made us successful,
you know, this ability to believe in fictions,
and I use the word successful, you know.
Powerful, yeah.
Powerful, yes.
We took over the world.
The thing that made us powerful
could well be the thing that makes us powerless.
In the sense that our ability to believe in fictions
and stories creates a society that could potentially lead to our powerlessness.
That's kind of one of the messages that,
when I connect the dots throughout your work,
when you look off into the future, I'm left feeling.
And even when you think about the modern problems we have,
those are typically consequences of our ability to believe in stories, yeah, and to believe in fictions. And if you play that forward 100 years, maybe 200 years, you believe we'll be the last of our species, right?
I think we are very near the kind of end of our species. It doesn't necessarily mean that we'll be destroyed in some huge nuclear war or something like that.
It could very well mean that we'll just change ourselves.
Using bioengineering and using AI and brain-computer interfaces, we will change ourselves to such an extent that we'll become something completely different,
something far more different from present-day Homo sapiens than we today are different from chimpanzees or from Neanderthals.
I mean, basically, you know, you have a very deep connection still
with all the other animals because we are completely organic.
We are organic entities.
Our psychology, our social habits,
they are the product of organic evolution
and more specifically mammalian evolution
over tens of millions of years.
So we share so much of our psychology
and of our kind of social habits
with chimpanzees and with other mammals.
Looking 100 years or 200 years to the future, maybe we are no longer organic or not fully
organic.
You could have a world dominated by cyborgs, which are entities combining organic with
inorganic parts,
for instance, with brain-computer interfaces,
you could have completely non-organic entities.
So all the legacy and also all the limitations of four billion years of organic evolution
might be irrelevant or inapplicable to the beings of the future.
What bet would you make? Because you're saying maybe here.
I don't know. I mean, we could destroy ourselves. I think there is a greater... I mean,
to completely destroy every last single human in the world, it is possible,
given the technology that we now command, but it's very difficult.
I think there is a greater chance, and again, this is just speculation, nobody really knows. Lots of people could suffer terribly, but I think it's more likely that some people will survive and then undergo radical changes. So it's not
that humanity is completely destroyed. It's just transformed into something else. And just to give
an example of what we are talking about, organic beings like us need to be in one place at any one time.
We are now here in this room.
That's it.
If you kind of disconnect our hands or our feet from our body, we die.
Or at least we lose control of them.
I mean, and this is true of all organic entities, of plants, of animals.
Now, with cyborgs or with inorganic entities, this is no longer true. They could be spread over time and space. I mean, if you find a way, and people are working on finding ways, to directly connect brains with computers or brains with bionic parts, there is no essential reason that all the parts of the entity need to be in the same room at the same time.
As you said that,
I started thinking a little bit about Neuralink
and what Elon Musk is doing
with interfacing us with computers.
But then I had a secondary thought,
which is: if there could be two Stevens, one here and then one in the United States right now, because we're connected to the same computer interface, theoretically I could hack Jack over there. I could hack his interface, so there could be three Stevens, because I hack Jack and then I hack you, and then there's four. And then I could eventually try and hack the entirety of the world, or a country. Yeah. And there could basically be one...
Once you can connect directly brains to computers,
first of all, I'm not sure if it's possible.
I mean, people like Elon Musk in Neuralink,
they tell us it's possible.
I'm still waiting for the evidence.
I don't think it's impossible,
but I think it's much more difficult
than people assume,
partly because we are very far from understanding the brain.
And we are even further away from understanding the mind.
We assume that the brain somehow produces the mind, but this is just an assumption.
We still don't have a working model, a working theory for how it happens. But if it happens, if it is possible to directly connect brains and computers
and integrate them into these kinds of cyborgs,
nobody has any idea
what happens next,
what the world would look like.
And it certainly makes it plausible, if, again, you reach that point, that you could have an inter-brain net, the same way that lots of computers are connected together to form the internet. If you can connect brains and computers directly, why can't we then create an inter-brain net, which connects lots of brains, as you described.
Again, I have no idea what it means.
I think this is the point where the way that our organic brains understand reality reaches its limit. Even our imagination, in the end, is the product, as far as we can tell, of organic biochemistry. So we are not equipped, I think, to have a kind of serious discussion of what a non-organic brain or a non-organic mind might be capable of doing, what it would look like.
And all the basic assumptions that we have
about brains and minds,
they are limited to the organic types.
How do you feel about artificial intelligence
and what's happening?
This year has been a real sort of landmark year, a big leap forward, for artificial intelligence: the conversation, public awareness, the technology itself, the investment in the technology, which is always, you know, a very important indicator of what's to come. How do you, as someone that's spent a lot of time thinking about this, emotionally, how do you feel about it?
Very concerned. I mean, it's moving even faster than I expected. When I wrote, say, Homo Deus in 2016, I didn't think we would reach this point so quickly, where we are at 2023. And the world is not ready for it. And again, it's not all negative. AI has enormous positive potential, and this should be clear.
And there is no chance of just banning AI or stopping all development in AI. I tend to speak
a lot about the dangers simply because you have enough people out there, all the entrepreneurs
and all the investors talking about the positive potential.
So it's kind of my job to talk about the negative potential, the dangers. But there is a lot of
positive potential. And humans are incredibly capable in terms of adapting to new situations.
I don't think it's impossible for human society to adapt to the new AI reality.
The only thing is it takes time.
And apparently we don't have that time.
And people compare it to previous big historical revolutions, like the invention of print or the Industrial Revolution. And you hear people say, yes, when the Industrial Revolution happened in the 19th century, you had all these prophecies of doom about how industry, the new factories and the steam engines and electricity, would destroy humanity or destroy our psychology or whatever. And in the end, it was okay. And when I hear these kinds of comparisons, as a historian, I'm very worried
about two things. First of all, they underestimate the magnitude of the AI revolution. AI is nothing
like print. It's nothing like the Industrial Revolution of the
19th century. It's far, far bigger. There is a fundamental difference between AI and the printing
press or the steam engine or the radio or any previous technology we invented. The difference
is it's the first technology in history that can make decisions by itself and that can create new ideas by itself.
A printing press or a radio set could not write new music or new speeches and could not decide what to print and what to broadcast.
This was always the job of humans.
This is why the printing press and the radio set in the end empowered humanity.
That you now have more power to disseminate your ideas.
AI is different.
It can potentially take power away from us.
It can decide.
It's already deciding by itself what to broadcast on social media, it's algorithms deciding what to promote.
And increasingly, it also creates
much of the content by itself.
It can compose entirely new music.
It can compose entirely new political manifestos,
holy books, whatever.
So it's a much bigger challenge
to handle that kind of technology. It's an independent agent
in a way that radio and the printing press were not. The other thing I find worrying about the
comparison with, say, the Industrial Revolution is that, yes, in the end, in a way, it was okay, but to get there, we had to pass through some terrible
experiments. When the Industrial Revolution came along, nobody knew how to build a benign
industrial society. So people experimented. One big experiment was European imperialism. Many people thought that
to build an industrial society means building an empire; that unless you have an empire that controls
the sources of the raw materials you need, iron, coal, rubber, cotton, whatever, and unless you
control the markets, you will not be able to survive as an industrial society.
And there was a very close link, also conceptually,
between building an industrial society and building an empire.
And all the leaders, the initial leaders of the Industrial Revolution,
built empires.
Not just Britain and France,
also small countries like Belgium,
also Japan.
When it joined the Industrial Revolution,
it immediately set about conquering an empire.
Another terrible experiment was Soviet communism.
They also thought,
how do you build an industrial society?
You build a communist dictatorship.
And it was the same with Nazism.
You cannot separate communism and Nazism from the Industrial Revolution.
You could not have created a communist or a Nazi totalitarian regime in the 18th century.
If you don't have trains, if you don't have electricity, if you don't have radio, you cannot create a totalitarian regime.
So these are just a few examples of the failed experiments.
You know, you try to adapt to something completely new.
You very often experiment and some of your experiments fail. And if we now have to go in the 21st century through the same process, okay,
we now have not radio and trains, we now have AI and bioengineering, and we again need to
experiment, perhaps with new empires, perhaps with new totalitarian regimes, in order to discover how to build a benign AI society,
then we are doomed as a species.
We will not be able to survive another round
of imperialist wars and totalitarian regimes.
So anybody who thinks,
hey, we've passed through the Industrial Revolution
with all the prophecies of doom,
in the end, we got it right?
No. As a historian, I would say that I would give humanity a C- on how we adapted to the Industrial Revolution.
If we get a C- again in the 21st century, that's the end of us.
It might seem quite trivial to many that the AI revolution seems to have begun with large language models.
And when I read Sapiens, this book I have here, language was so central to what made us powerful
as homo sapiens. In the beginning was the word. I didn't say it. You know, it's a very, very widespread idea that ultimately our power is based on words.
The reason that we control the world and not the chimpanzees or the elephants is because we had a much more sophisticated language, which enabled us again to tell these stories, stories about ancestral spirits and about guardian gods and
about our tribe, our nation, which formed the basis for cooperation. And because we could
cooperate, you could have a thousand people, a thousand humans cooperating in a tribe,
whereas the Neanderthals could cooperate only on the level of, say, 50 or 100 individuals,
this is why we rule the world and not the Neanderthals.
And you look at every subsequent kind of growth in human power and you see the same thing: that ultimately you tell a story with words. And language is like the master key that unlocks all the doors of our civilization.
Whether it's cathedrals or whether it's banks,
they're based on language, on stories we tell.
That again, it's very obvious in the case of religion.
But also if you think about the world's financial system.
So money has no value
except in the stories
that we tell and believe each other.
If you think about gold coins
or paper banknotes
or cryptocurrencies like Bitcoin,
they have no value in themselves.
You cannot eat them or drink them
or do anything useful with them.
But you have people telling you very compelling stories about the value of these things. And
if enough people believe the story, then it works.
They're also protected by language. Like my cryptocurrency is protected by a bunch of
words. Yeah. They're created by words
and they function with words and symbols.
When you communicate with your banker,
it's with words.
I mean, what happens when AI can create deepfakes of everything about you: your voice, your image, the way you talk, the type of words you use?
So there is already an arms race between banks and fraudsters.
I mean, we want the easiest communication with our banker.
I just pick up the phone, I tell a few words, and they transfer a million dollars.
But at the same time, I also want to be protected from an AI that impersonates my voice and
tone of voice and whatever.
And this is becoming difficult.
But on a deeper level, again, because money is ultimately made of words, of stories, AI could create new kinds of money.
The same way that, you know, cryptocurrencies like Bitcoin
have been created simply by somebody
telling people a story
and enough people finding this story convincing.
And I guess as a CEO
and as an entrepreneur,
you know that if you want to get investments,
what really gets investments
is a good story.
And what happens to the financial system if increasingly our financial stories are told by AI?
And what happens to the financial system and even to the political system if AI eventually creates new financial devices that humans cannot understand?
Already today, much of the activity on the world markets is being done by algorithms.
At such a speed and with such complexity that most people don't understand what's happening there.
If you had to guess, what is the percentage of people in the world today that really understand the financial system? What would be your kind of... Less than 1%. Less than 1%. Okay, let's be kind of
conservative about it. 1%, let's say. Okay. Fast forward 10 or 20 years, AI creates such complicated financial devices that there is
not a single human being on earth that understands finance anymore.
What are the implications for politics?
Like you vote for a government, but none of the humans in the government, not the prime
minister, not the finance minister, nobody understands the financial system. They just rely on AI to tell them what is happening.
Is this still a democracy? Is this still a human form of government in any way?
What do you say to someone that hears that and goes, ah, that's just, that's nonsense.
That's never going to happen. Why not? I mean, let's look back 15 years to the last big financial crisis in 2007, 2008.
This financial crisis, to a large extent, began with these extremely complicated financial
devices, CDOs.
What's the acronym?
Collateral Debt something. I don't even know what the last letter stands for.
You had these kinds of whiz kids on Wall Street inventing a new financial device that nobody except them really understood, which is why also it wasn't regulated effectively by the banks and governments. And it worked well for a couple of years, and then it brought down the world's financial system.
And what happens if now AIs come with even more sophisticated financial devices, and
for a couple of years everything works well, they make trillions of dollars for us, and
then one day it doesn't.
One day the system collapses and nobody understands what is happening.
And again, it's not that you didn't go to college
or whatever, no.
It's just objectively,
the complexity of the system has reached a point
when only an AI is able to crunch the numbers,
is able to process enough data
to really grasp the shape,
the dynamics of the financial system.
We're already there, though.
I think if anyone does understand how the financial system works and how the markets work, it is a bunch of Homo sapiens relying on a computer to tell them something and trusting that computer's calculations.
Yeah. And this will get more and more complicated and sophisticated. And for people
who say, no, it's not going to happen, the question is, what is stopping it? I mean, you know, in all the discussions about AI,
the kind of dangers that draw people's attention, like the poster child of AI dangers, is something like AI creating a new virus that kills billions of people, a new pandemic.
So you have a lot of people concerned
about how do we prevent an AI
by itself, or maybe some small terrorist organization, or even a 16-year-old teenager
giving an AI a task to create a dangerous virus and release it to the world. How do we prevent
this? And this is a serious concern, and we should be concerned about it. But this gets a lot more attention than the question, how do we prevent the financial system from becoming so complicated
that humans can no longer understand it? And I see a lot of regulations being at least considered for how to prevent AI from creating dangerous new viruses, but I don't see any kind of effort to keep the financial system at a level that humans can understand.
Why do you think that is?
I mean, I had a guess. My guess was, why would the UK cut itself off, then? Why would they give themselves a disadvantage exactly when, you know, it just means that the UK will suffer? And if America is using a really advanced AI algorithm to get ahead, we have to keep up.
Yeah, it's the logic of the arms race. And again, it's not all bad. I mean, you have a better financial system, you have a more prosperous economy. I mean, money isn't bad. I mean, it's the basis
for almost all human cooperation. And a lot of financial devices, in the end, if you think,
what are they? They are devices to establish trust between people, especially trust between
strangers. And money, in essence, is a device for establishing trust. I don't know you, you don't know me, but we both trust this gold coin or piece of paper
so we can cooperate on sharing food or creating medicine.
And the most sophisticated financial devices, they basically do the same thing.
Stocks and bonds and these CDOs,
they are a method to establish trust. And when you open a new bank account,
the most important thing is how do I trust the bank to really take care of my money and to follow
my instructions, but not to be open to fraud and things like that.
And again, you as an investor, when you try to get money from, or you as an entrepreneur,
when you try to get money from investors, the biggest issue is always trust.
And if somebody can come up with a new way to establish trust between people,
that's a good thing.
But if this new way increasingly depends on non-human intelligence,
on systems that humans cannot understand,
that's the big question. What happens to human society
when the trust that is at the basis
of all social interactions
is actually no longer trust in humans,
it's trust in a non-human intelligence
that we don't fully understand
and that we cannot anticipate.
And part of the problem with regulating AI or AI safety, it goes back to what we discussed earlier,
that AI is different from printing presses or radio sets or even atom bombs.
If you want to make nuclear energy safe, then you need to think about all the different ways that a nuclear power station can have an accident.
And I guess there is a limited number of things that can go wrong. And ideally, if you think hard, if you have enough people thinking hard enough,
you can make safe nuclear reactors, safe nuclear power stations.
But AI is fundamentally different, because AI keeps changing. It keeps reacting to the world, it keeps reacting to you, coming up with new inventions, new ideas, new decisions. So making AI safe is a bit like making a nuclear reactor safe while taking into account the fact that the nuclear reactor can decide to change in ways that you can't anticipate, and even worse, it can react to you.
So if you build a particular safety mechanism
for the nuclear reactor,
what happens if the nuclear reactor says,
oh, they built this mechanism, let's do that,
to somehow get around the safety mechanism?
We don't have this problem with nuclear reactors,
but this is the problem with AI. We are trying to contain something which is an independent agent and which might actually come to understand us better than we understand it.
I'm really curious about how this will impact... you know, you talked about elected officials there and how their financial decision-making might be driven by algorithms.
But governments and authority itself,
I've pondered recently
whether there'll come a day
in the not-so-distant future
where we might vote for an algorithm,
where we might vote for an AI to be our government.
Is that crazy thinking? I think we're quite a long way off from there. We would still want
humans, at least in the symbolic role of being the prime minister, the member of parliament,
whatever, the president. The real problem is that increasingly these humans could be kind of figureheads or puppets, when the real decisions, the most consequential decisions, are made by algorithms, partly because it will just be too complicated for the humans at the top to understand the situation or to understand the different options.
So going back to the financial example, imagine the algorithm telling the prime minister that we are facing a financial meltdown and that we have to do something within the next, I don't know, 30 minutes to prevent a national or global financial meltdown, and there are like three options, and the algorithm recommends option A. And there is just not enough time to explain to the prime minister how the algorithm reached the conclusion, or even what is the meaning of these different options. And again, people think about this scenario mostly in relation to war: what happens if you have an algorithm in charge
of your security system,
and it alerts you to a massive incoming cyber attack,
and you have to react immediately.
And this could, if you react in a specific way,
this could mean war with another nation,
but you just don't have enough time to understand how the algorithm reached the decision
and how the algorithm was also able to determine that of all the different options,
this is the best option.
Do you think that humans believe we're more complicated and special than we actually are?
Because I think many of the rebuttals, when we talk about artificial intelligence, stem back to this idea that we're innately genius, creative, spiritual, special, different from artificial intelligence. Like our intelligence is somewhat divine, or we've got free will and we...
Yeah, I mean, if the argument is
we have free will, we have a divine soul,
and therefore no algorithm
will ever be able to understand us
and to predict our decisions
or to manipulate us,
then this is a very common argument,
but it's obviously nonsensical.
I mean, even before AI,
it was, even with previous technology,
it was possible to a large extent
to predict people's behavior
and to manipulate them.
And AI just takes it to the next level.
Now, with regard to the discussion of free will,
my position is
you cannot start with the assumption
that humans have free will.
If you start with this assumption, then it actually makes you very incurious, lacking curiosity about yourself, about human beings. It kind of closes off the investigation before it begins. You assume that any decision you make is just a result of your free will. Why did I choose this politician, this product, this spouse? Because it's my free will. And if this is your position, there is nothing to investigate. You just assume you have this kind of divine spark within you that makes all the decisions, and there is nothing to investigate there.
I would say no. Start investigating, and you'll probably discover that there are a lot of factors,
whether it's external factors like cultural traditions and also internal factors
like biological mechanisms, that shape your decisions. You chose this politician or this spouse because of certain cultural traditions and because of certain biological mechanisms, your DNA, your brain structure, whatever.
And this actually makes it possible for you to get to know yourself better.
Now, if after a long investigation you have reached the conclusion that yes, there are cultural influences, there are political influences, there are genetic and neurological influences, but still there is a certain percentage of my decision that cannot be explained by any of these things, then okay, call it free will. And we can discuss it.
But don't start with this assumption, because then you lose the incentive to explore yourself.
And anybody who embarks on such a process of self-exploration, whether it's in therapy,
whether it's in meditation, whether it's in the laboratory of a brain scientist or as a historian in the archive, you will be amazed
to discover how much of your decisions are not the result of some mystical free will.
They are the result of cultural and biological factors. And this also means that you are vulnerable to being deciphered and manipulated by political parties, by corporations, by AI.
People who have this kind of mystical belief in free will are the easiest people to manipulate because they don't think they can be manipulated.
And obviously they can.
We humans should get used to the idea
that we are no longer mysterious souls.
We are now hackable animals.
That's what we are.
You said that at the World Economic Forum.
Yeah.
Again, this is the same point, basically,
that it's now possible to hack human beings, not just to hack our smartphones, our bank accounts, our computers, but to really hack our brains, our minds, and to predict our behavior and manipulate our behavior more than in any previous time in history.
The other line that you said, which really made me think and ponder, was that previously human life was about the drama of decision-making, and without this we won't have a meaning in life.
Yeah. If you look, you know, at politics, at religion and at culture, people told stories about their lives, or the lives of people in general,
as a kind of drama of decision-making: that you reach a particular junction in life and you need to choose. You need to choose between good and evil. You need to choose between political parties. You need to choose what to study at university, or where to work, what kind of job to apply to. And our stories revolved around these decisions.
And what happens to human life if increasingly the power to make decisions is taken from us, and increasingly it's algorithms making all these decisions for us or about us?
Is that possible?
It's already happening. Increasingly, you know,
you apply to a bank to get a loan. In many places, it's no longer a human banker who is making this
decision about you,
whether to give you a loan, whether to give you a mortgage.
It's an algorithm analyzing billions of bits of data about you
and about millions of other customers or previous loans,
determining whether you're creditworthy or not.
And if they refuse to give you a loan and you ask the bank, why didn't you give me a loan? The bank says, we don't know. The computer said no. And we just believe our computer, our algorithm.
And it's happening also in the judicial system: various judicial decisions, verdicts, like, once the judge has decided that you committed some crime, the sentence, whether to send you to prison for two months or eight months or two years, are increasingly determined by an algorithm.
You apply to a place at university, you apply to a job.
This too is increasingly decided by algorithms.
Dating.
Dating, yes.
I mean, even unbeknownst to you,
the algorithms of the dating apps that you're using
are shaping your romantic life.
But in a world of, you know, robotics and artificial intelligence, why do I need to find a person at all? Why not just have a relationship with a robot or with an AI?
Yeah, we do see the beginning of this, that people are building more and more intimate relationships with non-human intelligences, with AIs and bots and so forth. And this raises a lot of difficult and profound questions.
Now, part of the problem is that the AIs are built to mimic intimacy. Intimacy is an extremely powerful thing, not just in romance, also in the market, also in politics. If you want to change somebody's mind about anything, a political issue, a commercial preference, intimacy is kind of the most powerful weapon.
And somebody you really trust, somebody you have intimate relationships with, will be
able to change your views on a lot of things more than someone you see on TV or just an article you read in a newspaper.
There is a huge incentive for the creators of AIs
to create AIs that are able to forge
intimate relationships with humans.
And this makes us extremely vulnerable
to this new type of manipulation that was previously just
unimaginable.
Because loneliness is at, you know, all-time highs, especially in the sort of Western world, and sexlessness. And I was reading some stats about how the bottom 50% of men in particular are having almost no sex relative to the top sort of 10 percent. And you think, you know, this disparity, the rise of digitalization, loneliness, we're in our homes, on screens more than ever before, and then you hear about this industry of AI and sex dolls and all this, and you just wonder, you play it forward and go, oh yeah, it's going there.
And the thing is that it's not that the humans are so stupid or something, that they kind of project something onto the AI and fall in love with an AI chatbot. The AI is deliberately built, created, trained to fool us. The same way, you know, you look at the previous 10 years, there was a big battle for human attention, there was a battle between different social media giants and whatever, how to grab human attention, and they created algorithms that were really amazing at grabbing people's attention.
And now they are doing the same thing, but with intimacy.
And we are extremely exposed.
We are extremely vulnerable to it.
Now, the big problem is, and again, this is where it gets kind of really philosophical, that what humans really want or need from a relationship is to be in touch
with another conscious entity. An intimate relationship is not just about providing my
needs. Then it's exploitative, then it's abusive. If you're in a relationship and the only thing you think about is,
how would I feel better?
How would my needs be provided for?
Then this is a very abusive situation.
A really healthy relationship is when it goes both ways.
You also care about the feelings and the needs of the other person, of the other entity.
Now, what happens if the other entity has no feelings, has no emotional needs,
because it has no consciousness? That's the big question. And there is a huge confusion between consciousness and intelligence. Intelligence is the ability to solve problems. Consciousness is the ability to feel things like pain and pleasure and love and hate and sadness and anger and so many other things.
Now, in humans and also in other mammals,
intelligence and consciousness actually go together.
We solve problems by having feelings.
But computers are fundamentally different.
They are already more intelligent than us
in at least several narrow fields,
but they have zero consciousness.
They don't feel anything.
When they beat us at chess or go or some other game, they don't feel joyful and happy.
If they make a wrong move, they don't feel sad or angry.
They have zero consciousness.
As far as we can tell, they might soon be far more intelligent than us and still have zero consciousness.
Now, what happens when you are in a relationship with an entity which is far more intelligent than you
and can also imitate, mimic, consciousness? It knows how to solve the problem of making you feel as if it is conscious, but it still has no feelings of its own. And this is a very disturbing vision of the future.
Because it opens us up to manipulation? Is that what you're saying?
First of all, it opens us to manipulation,
but there is also the big question: what does it mean for the health of our own mind, of our own psyche, if many of our important relationships in life are with non-conscious entities that don't really have any feelings of their own? Again, they are very good at faking it, they are very good at catering to our feelings, but again, it's just manipulation in the end.
Are you optimistic about the happiness of humans going forward?
Or do you think happiness will take its own course?
I've heard you talk about how happiness might just become a biochemical prescription or something.
Yeah, I mean, we don't have a good track record with regard to happiness.
If you look at the last 100,000 years from, say, the Stone Age until the 21st century,
you see a dramatic rise in human power.
We are thousands of times more powerful as a species and as individuals than we were
in the Stone Age.
We are not thousands of times happier.
We just don't really know how to translate power into happiness.
And this is very clear when you look at the lives of the most powerful people in the world,
that there is no correlation between how rich and powerful you are
and how happy you are as a
person. I mean, I don't get the impression that people like, I don't know, Vladimir Putin
or Elon Musk are the happiest people in the world, even though they are some of the most powerful
people in the world. So there is no reason to think that as humanity gets even more powerful
in coming decades, we will get any happier.
And understanding happiness is about understanding the deep dynamics
of not even the brain, but of the mind, of consciousness.
And we are just not there yet.
We are very, very far from it. And the related problem is that humans usually understand how to manipulate something long before they understand the consequences of the manipulations.
If you look at the outside world,
at the ecological system,
we have learned how to cut forests,
how to build huge dams over rivers,
long before we understood
what will be the consequences
for the ecological system,
which is why we now have this ecological crisis.
We manipulated the world without understanding the consequences.
Something similar might happen with the world inside us. With more powerful medicines,
with brain-computer interfaces, with genetic engineering, and so forth,
we are gaining the power to manipulate our internal world, the world within us.
But again, the power to manipulate is not the same thing as understanding the complexity of
the system and the consequences of the manipulation.
A related manipulation is immortality and our pursuit of it.
I've sat with people on this podcast who are committing their lives to staying alive forever.
And there's a through line there between our desire to be immortal,
the rise in the scientific discoveries that are enabling that, and our happiness.
I've often thought much of the reason why things are special in my life is because they're scarce, including my time. And I almost wonder about the psychological issues I would face if I knew I was immortal. Like, if I knew that the partner I'm with doesn't come at the expense of another one I can be with, you know, at 30 years old. And the car, you know, the choices you make, I think what makes them valued is their scarcity against the backdrop of a finite life.
Yeah, it will definitely change everything. If you think about relations between parents and children, so if you live forever, those 20 years you spent raising somebody 2,000 years ago, what do they mean now? But I think long before we get to that point, I mean, most of these people are going to be incredibly disappointed, because it will not happen within their lifetime. Another related problem is that we will not get to immortality. We will get to something that maybe should be called 'a-mortality'. Immortality means that, like a god, you can never die, no matter what happens. Even if we solve cancer and Alzheimer's and dementia and whatever, we will not get there. We will get to kind of a life without a definitive expiry date, in which you can live indefinitely. You can go every 10 years to a clinic and get yourself rejuvenated, but if a bus runs you over or your airplane explodes or a terrorist kills you, you're dead and you're not coming back to life.
Now, realizing that you have a chance to live forever, but that if there is an accident you die, this creates a level of anxiety and terror unlike anything that we know in our own lives.
I think the people who will be in that situation
will be extremely anxious and miserable.
And another issue is, you know, people often spend so much effort trying to gain something without really understanding why. What will you do with it? What is so good about it? You know, like people spend so much effort to get more and more money instead of thinking, what will I actually do with that money? So it's the same with the people who want to extend life forever. What is so good about life? What will you do with it? And if you know it, why don't you do it already?
I hear people saying, you know, how precious human consciousness is. Why? What do you think is so precious about it? And whatever it is, why don't you do it right now? I mean, why spend your life developing some kind of treatment that will extend your consciousness for a thousand years? Just spend your time doing now whatever you think you would be doing with your consciousness a thousand years from now.
So if they were to say, but it'll give me more time with my family, you're saying, instead of wasting your time, just like...
Exactly.
So, you know, somebody who has no time
for their family at all right now
because they are busy developing
the kind of miracle cure
that will enable them to spend time
with their family in 200 years.
This makes no sense.
I think about the disparity
that artificial intelligence and these forms of
sort of bioengineering might create, because it's conceivable that the rich will gain access to
these technologies first. And then, you know, when we think about bioengineering, being able to sort
of play with our genetic code, that means if I, for example, managed to get my hands on some kind of bioengineering treatment to make sure that my kids were maybe a little bit smarter, maybe a little bit stronger, whatever, then you're going to start a sort of genetic chain of modified children that are superior in intelligence and strength and whatever else might be desirable. And then you have this disparity in society, where one set of humans are on a completely different exponential trajectory and the other humans are, you know, left behind.
This is extremely dangerous.
I think we just shouldn't go there.
That we shouldn't invest a lot of resources, efforts,
in developing these kinds of upgrades and enhancements
that are very likely, at least at first,
to be the preserve of a small elite
and to translate economic inequality into biological inequality and to basically split the human
species, to split Homo sapiens into a ruling class of superhumans and the rest of us.
This is a very, very dangerous development.
Related to that is the problem that I don't think these will be upgrades at all. What worries me is that a lot of these things will turn out actually to be downgrades. Again, we don't understand our bodies, our brains, our minds well enough to know what will be the consequences of tweaking our genetic code or of, I don't know, implanting all kinds of devices into our brains.
People who think that this will enable them, let's say, to upgrade their intelligence,
they don't know what the side effects will be.
It could be that the same treatment that increases your intelligence
also decreases your compassion or your spiritual depth or whatever.
And the danger is that, especially if this technology is in the hands of powerful corporations, armies, governments, they will enhance those qualities that they want, like intelligence and like discipline, while disregarding other qualities which could be even more important for human flourishing, like compassion or artistic sensitivity or spirituality.
If I think about somebody again like Putin, what would he do with this type of technology?
Then yes, he would like an army of super intelligent and super loyal
soldiers. And if these soldiers don't have any compassion or any spiritual depth, all the better
for him.
But that speaks to the arms race. And you said, we think we shouldn't, but China will see that as an opportunity, or Putin will see that as an opportunity, if the Western world, if the United States or the UK, don't. And so again it comes back to this point of, you know, we're damned if we do, we're damned if we don't.
I'm not sure that in this case it works, because again, a lot of these upgrades are likely to have detrimental side effects, both for the person in question and for the society as a whole. And I think that in this case, societies that choose to progress more slowly and safely will actually have an advantage.
It's like if you say,
you know, there is some other country
where they don't have any brakes on their cars, and they don't have any seatbelts,
and they release new medicines without checking their side effects. They're moving so fast,
we are left behind. No, it makes no sense to imitate them. This will actually ruin their
societies. You don't want to imitate these kinds of harmful effects.
With development of AI, it's different.
I think there, the advantages in things like finance,
like the military, will be so big
that an AI arms race is almost inevitable.
But with trying to kind of bioengineer humans,
if you go too fast, it will be self-destructive.
So we can take it more slowly and safely
and without being kind of left behind in an arms race.
You said on the Tim Ferriss podcast,
the best scenario is that homo sapiens will disappear,
but in a peaceful and gradual way
and be replaced by something better.
It's quite an uncomfortable statement to listen to.
I think that, again, the type of technologies
that we are now developing,
when you combine them with the human ambition
to improve ourselves,
it's almost inevitable that we will use these technologies to change ourselves.
The question is whether we will do it slowly and responsibly enough
for the consequences to be beneficial.
But the idea that we can now develop these extremely powerful tools of bioengineering
and AI and remain the way we are, we'll still be the same homo sapiens in 200 years, in
500 years, in 1,000 years.
We'll have all these tools to connect brains to computers,
to kind of re-engineer our genetic code,
and we won't do it?
I think this is unlikely.
One of the outstanding questions that I have,
and one of the sort of observations I've had,
is people like Sam Altman,
the founder of OpenAI that made ChatGPT,
started working on universal basic income products
like Worldcoin. And I thought,
do you know what, that's curious that the people that are
at the very forefront of this AI
revolution are now trying to
solve the second problem they see coming
which is people not having
jobs, essentially.
Do you think that's... because I've spoken a lot this year on stages, and one of the questions I always get asked is about the implications of AI for jobs as we know them in the workforce.
Is it realistic to believe that most jobs will disappear as we know them today?
I think many jobs, maybe most jobs will disappear,
but new jobs will emerge. You know, most jobs that
people do today didn't exist 200 years ago. Like this?
Yeah, like this, like doing a podcast. And there will be new jobs. The really big problem
will be how to retrain people. It demands a lot of financial support,
also psychological support,
for people to kind of relearn,
retrain, reinvent themselves,
and doing it not just once,
but repeatedly throughout their career,
throughout their lives.
The AI revolution will not be a single watershed event. Like, you have the big AI revolution in 2030, you lose 60% of jobs, you create lots of new jobs, you have 10 difficult years of everybody adjusting, adapting, reskilling, whatever, and then everything settles down to a new equilibrium. It won't be like that.
AI is nowhere near its full potential.
So you'll have a lot of changes by 2030,
even more changes by 2040,
even more changes by 2050.
You will have new jobs,
but the new jobs too will change and disappear.
What new jobs?
In a world where intelligence is disrupted,
what jobs are left?
Because you say you're going to retrain me.
I'm like, I'm not going to be able to keep up with an AI that's retraining every second.
I'm not sure.
I mean, some of the answers might be counterintuitive.
That at least at present, we see that AI is extremely good at automating jobs that only require cognitive skills.
But they are not good at jobs that require motor skills and social skills.
So if you think about, say, doctors and nurses.
So at least those types of doctors who are only doing cognitive work. They read articles, they get your medical results,
all kinds of tests and whatever, they diagnose your disease, and they decide on a course of
treatment. This is purely cognitive work. This is the easiest thing to automate. But if you think about a nurse that has to
replace a bandage for a crying child, this is much more difficult to automate.
You don't think that's possible to automate?
I think it is possible, but not now. You need very delicate motor skills and also social
skills to do that.
Did you see Elon's video the other day with the Tesla robot?
I see a lot of these videos.
It's getting the egg and it's cracking the egg and it's going like this.
No, I'm not saying it's impossible.
I'm just saying it will take longer.
It's more difficult.
Again, there is also the social aspect.
If you think about self-driving vehicles,
the biggest problem for self-driving vehicles is humans.
I mean, not just the human drivers.
It's the pedestrians.
It's the passengers.
How do you deal with a drunken passenger?
Whatever.
So again, it's not impossible, but it's much more difficult.
So again, I think that there will be new jobs, at least in the foreseeable future.
The problem will be to retrain people.
And the biggest problem of all will be on the global level, not on the national level.
When I hear people talk about universal basic income,
the first question to ask is, is it universal or national?
Is it a system that, let's say, raises taxes on big tech corporations in Silicon Valley
in California and uses the money to provide basic services and also retraining courses for people in Ohio and Pennsylvania?
Or does it also apply to people in Guatemala and Pakistan? I mean, what happens when it becomes
cheaper to produce shirts with robots in California than in Guatemala and in Mexico?
Does Sam Altman have a vision of the US government
raising taxes in California and sending the money
to Guatemala to support the people there?
If the answer is no, we are not talking
about universal basic income, we are only talking
about national basic income in the US, then what happens to the people in Guatemala? That's the biggest question.
And a sub-question to that is about how one should be educating our children, and about education institutions as they are today, because with what's to come, it makes me wonder what skill would be worth investing 10 or 12 years of a child's education into.
Nobody has any idea. I mean, if you think about specific skills, then this is the first time in
history when we have no idea how the job market, or how society, will look in 20 years.
So we don't know what specific skills people will need.
If you think back in history, it was never possible to predict the future,
but at least people knew what kind of skills would be needed in a couple of decades.
If you live, I don't know, in England in 1023, a thousand years ago, you don't know what will happen in 30 years.
Maybe the Normans will invade, or the Vikings, or the Scots, or whoever. Maybe there'll be an earthquake.
Maybe there'll be a new pandemic. Anything can happen. You can't predict. But you still have a very good idea of
how the economy would look and how human society would look in the 1050s or the 1060s.
You know that most people will still be farmers. You know it's a good idea to teach your kids
how to harvest wheat, how to bake bread, how to ride a horse, how to shoot
a bow and arrow. These things will still be necessary in 30 years. If you now look 30 years
to the future, nobody has any idea what kind of skills will be needed. If you think, for instance,
okay, this is the age of AI, computers, I will teach my
kids how to code computers. Maybe in 30 years, humans no longer code anything because AI is so
much better than us at writing code. So what should we focus on? I would say the only thing
we can be certain about is that 30 years from now,
the world will be extremely volatile.
It will keep changing at an ever more rapid pace.
Do you think this is going to increase the amount of conflict?
Because I watched a video on your YouTube channel
where you talked about the return of wars.
Yeah.
That's one of the dangers.
That danger is there, and we see it all over the world now.
Like 10 years ago, we were in the most peaceful era in human history. And unfortunately,
this era is over. We are now in a new era of wars and potentially of imperialism.
And we are seeing it all over the world, with the Russian invasion of Ukraine,
now the war in the Middle East, Venezuela and Guyana, East Asia. War is back on the table.
It's not just because of the rapid changes and the upheavals they cause. It's also because, you know, 10 years ago, we had a global order,
the liberal order, which was far from perfect, but it still kind of regulated relations between
nations, between countries. It was based on the liberal worldview, on the idea that despite our national differences, all humans share
certain basic experiences and needs and interests, which is why it makes sense for us to work
together to defuse conflicts and to solve our common problems.
It was far from perfect,
but it did create the most peaceful era in human history.
Then this order was repeatedly attacked,
not only from outside,
from forces like Russia or North Korea or Iran
that never accepted this order,
but also from the inside, even from the United States,
which was, to a large extent, the architect of this order,
with the election of Donald Trump, who says,
I don't care about any kind of global order.
I only care about my own nation.
And you see this way of thinking, that I only care about the interests of my nation, more and more around the world.
Now, the big question to ask is, if all the nations think like that, what regulates the relations between them?
And there was no alternative. Nobody came along and said, okay, I
don't like the liberal global order, I have a better suggestion for how to manage relations
between different nations. They just destroyed the existing order without offering an alternative. And the alternative to order is simply disorder.
And this is now where we find ourselves. Do you think there's more wars on the way?
Yes. Unless we re-establish order, there will be more and worse wars coming in the next few years,
in more and more areas around the world. You see defense budgets
all over the world skyrocketing, and this is a vicious circle: when your neighbors
increase their military budget, you feel compelled to do the same, and then they increase their budget even more. You know, when I say that
the early 21st century was the most peaceful era in human history, one of the indications
is how low the military budgets all over the world were. For most of history, kings and emperors and khans and sultans,
the military was the number one item on their budget. They spent more on their soldiers and
navies and fortresses than on anything else. In the early 21st century, most countries spent something like a few percentage points
of their budget on the military.
Education, healthcare, welfare were a much bigger item on the budget than defense.
And this is now changing. The money is increasingly going to tanks and missiles
and cyber weapons instead of to nurses and schools and social workers. And again, it's not inevitable.
It's the result of human decisions. The relatively peaceful era of the early 21st
century, it did not result from some miracle. It resulted from humans making wise decisions
in previous decades. What are the wise decisions we need to make now, in your view?
Reinvest in rebuilding a global order, which is based on universal values and norms
and not just on the narrow interests of specific nation states.
Are you concerned that Trump might be elected again shortly?
I think it's very likely.
And if it happens, it is likely to be kind of the death blow
to what remains of the global order. And he says it openly.
Now, again, it should be clear that many of these politicians present
a false dichotomy, a false binary vision of the world, as if you have to choose between patriotism and globalism, between being
loyal to your nation and being loyal to some kind of, I don't know, global government or whatever.
And this is completely false. There is no contradiction between patriotism and global
cooperation. When we talk about global cooperation,
we definitely don't have in mind,
at least not anybody that I know,
a global government.
This is an impossible and very dangerous idea.
It simply means that you have certain rules and norms
for how different nation states treat each other
and behave towards each other.
If you don't have a system of global norms and values, then very quickly what you have
is just global conflict, just wars.
I mean, some people have this idea, they imagine the world as a network of friendly fortresses:
each nation will be a fortress with very high walls, taking care of its own
interests, but living on relatively friendly terms with the neighboring fortresses,
trading with them and whatever.
Now, the main problem with this vision is that fortresses are almost never friendly.
Each fortress always wants a bit more land,
a bit more prosperity,
a bit more security for itself
at the expense of the neighbors.
And this is the high road to conflict and to war.
There's that phrase, isn't there?
Ignorance is bliss.
Now, something that your work has forced you
and continues to encourage you to not live in is ignorance.
So with that, one might logically deduce that
out the window goes your bliss.
Are you happy?
I think I'm relatively happy, at least happier than I was for most of my life.
Part of it is that I invest a lot of my time,
not just in, you know,
researching what is happening in the world,
but also in the health of my own mind.
And, you know, keeping a kind of balanced information diet.
It's basically like with food: you need food in order to survive
and to be healthy, but if you eat too much, or too much of the wrong stuff, it's bad
for you. And it's exactly the same with information. Information is the food of the mind, and if you eat too much of it, or of the wrong kind, you'll get a very sick mind.
So I try to keep a very balanced information diet, which also includes information fasts.
So I try to disconnect. Every day I dedicate two hours to meditation. And every year I go
for a long meditation retreat of between 30 and 60 days, completely disconnecting, no
phones, no emails, not even books. Just observing myself, observing what is happening inside my body and inside my mind,
getting to know myself better and kind of digesting all the information that I absorbed
during the rest of the year or the rest of the day.
Have you seen a clear benefit in doing that?
Yes, very, very clear.
I don't think I would be able to write these books
or to do what I'm doing
without this kind of information diet
and without devoting a lot of time and attention
to balancing my mind and keeping it healthy.
You know, so many people spend so much time
keeping their body healthy,
which is very important, of course,
but we need to spend an equal amount of attention
on our mind.
It is as important as our body.
When you said you don't think you'd be able to do what you do
if you didn't take these information diets, why?
First of all, I'll just be overwhelmed, and not have any kind of
peace of mind, not have any kind of perspective. If you're constantly in the news cycle, in the
information cycle, you lose all perspective. You know, organic entities, unlike AIs, unlike computers,
we are cyclical entities.
We need to sleep every day.
AIs don't sleep.
You know, even the stock exchange closes.
Every afternoon it closes, also for the weekend,
also for Christmas.
If you think about it, this is amazing: if a war erupts at Christmas, Wall Street will be able to react only after a couple of days, because the people are on holiday.
They took time off.
Even the money market takes time off. But if you give AI
full control, there will never be any time off. It will be 24 hours a day, 365 days a year.
And people just collapse. I mean, I think part of the problem that politicians today face is that they need to be on
24 hours a day because the news cycle is on 24 hours a day. Like in previous eras, if you're,
I don't know, a king in the Middle Ages and you go somewhere, you're on the road in your carriage
and nobody can reach you. Even if the French are invading, nobody can reach you.
You have some time off.
If you're a prime minister now, there is no time off.
And computers are built for it, but human brains aren't. If you try to keep an organic entity awake
and kind of constantly processing information and reacting 24 hours a day,
it will very soon collapse.
It's funny, it made me think of what, I think it's the former Netflix CEO,
one of the Netflix CEOs or someone, said:
our biggest competitor is sleep.
Sleep, yeah.
That's a very scary and very, I think, important line.
And it's a very honest line.
And it's scary because if people don't sleep,
they collapse and eventually they die.
And this is part of the problem
that we talked earlier about,
about the battle for human attention
in social media, in streaming services.
Now, for many of these corporations,
they measure their success by user engagement.
The more people are engaged, the more successful we are.
Now, user engagement is a very broad measure. According to this measurement,
one hour of outrage is better than 10 minutes of joy and certainly better than one hour of sleep.
Because one hour of outrage, I will consume three adverts.
Yes. And then that means that the corporation makes 30, for example. Yeah. And from two hours of sleep they make
nothing. From 10 minutes of joy, maybe they sell only one ad. But from the viewpoint of how
humans function, and how this organism functions,
10 minutes of joy are probably better for us than one hour of outrage.
And certainly we need not just two hours, we need six, seven, eight hours of sleep.
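To put rough numbers on that exchange, here is a minimal sketch, in Python, of the engagement arithmetic being described. The price of 10 per advert is a hypothetical figure, and the ad rates simply follow the conversation above (three adverts in an hour of outrage, one advert in ten minutes of joy, nothing during sleep); none of this is real platform data.

```python
# Toy model of the incentive structure described above: when success is
# measured purely by user engagement, outrage out-earns joy, and sleep
# earns nothing. All numbers are illustrative assumptions.
AD_PRICE = 10  # hypothetical revenue per advert served

# (activity, hours of user attention, adverts served per hour)
activities = [
    ("outrage", 1.0, 3.0),   # three adverts in one hour of outrage
    ("joy", 10 / 60, 6.0),   # one advert in ten minutes of joy
    ("sleep", 2.0, 0.0),     # no adverts can be served while asleep
]

for name, hours, ads_per_hour in activities:
    revenue = hours * ads_per_hour * AD_PRICE
    print(f"{name:>7}: {hours:.2f} h of attention -> {revenue:.0f}")
# outrage: 1.00 h of attention -> 30
#     joy: 0.17 h of attention -> 10
#   sleep: 2.00 h of attention -> 0
```

A recommender that maximizes this objective will, by construction, favor the outrage row over joy or rest, which is the incentive problem being described here.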
Well, this is why the algorithms on certain platforms, specifically TikTok,
are just absolutely addictive, to say the least.
Because they hacked us.
Yeah, literally. You know, we had a certain level of addiction to the previous social algorithms,
and then TikTok came along and said, hold my beer, and they just went for it.
And they've won because of that. I see 60-year-olds absolutely addicted to TikTok,
because they don't understand the concept of an algorithm sometimes,
and they don't understand the advertising model and all of that stuff.
It's hypnotism. They're absolutely hypnotized. Funnily enough, my driver's one of them.
My driver's outside, and whenever I walk up to his car, he's just like this on TikTok, scrolling. And
I had a conversation with him last night. I'm like, do you realize that TikTok has your brain?
Absolutely. And we're just at the very foot, sort of the first steps, of an
exponential curve of algorithms competing for our attention and our brain. We haven't seen anything yet.
I mean, these algorithms, they are what, like 10 years old?
If you think about these social media algorithms
and the algorithms that get to know you personally,
to hack your brain, and then grab your attention.
They are 10 years old.
And the companies die if they don't beat the other algorithms.
So like Twitter now, when Elon took it over,
and I think people will relate to this if you use Twitter,
suddenly I've seen more people having their heads blown off
and being hit by cars on Twitter
than I'd ever seen in the previous 10 years.
And I think someone at Twitter has gone,
listen, this company's going to die
unless we increase time spent on this platform
and show more ads.
So let's start serving up a more addictive algorithm.
And that requires a response from Instagram.
And the other platform, so it's a real...
You know, Elon has this other company, the Boring Company,
which is about boring tunnels, of course.
But actually, it might be a good idea to make Twitter more boring
and to make TikTok more boring.
I mean, I know it's a very bad kind of
business decision, but I don't think humanity will survive unless we have more boredom.
If you ask me what is wrong with the world in 2023, it's that everybody is far too excited.
And if I had to kind of summarize what's wrong in one word,
the word is excited.
And people don't understand the meaning of this word.
People think that excited means happy.
Like two people meet, I am so excited to meet you. I have a new idea, I publish a new book, whatever.
Oh, this is such an exciting idea,
such an exciting book. And exciting isn't happy. Exciting isn't always good. Sometimes, yes,
sometimes it's good to be excited. An organism that is excited all the time dies. The meaning
of excitement is that, you know, the body is in fight-or-flight mode. All the nerves are on, all
the neurons are firing, all the muscles are tense. This is excitement. And very often negative things
excite us. Fear is excitement. Hate is excitement. Anger is excitement. And, you know, when I meet a good friend,
I'm often relaxed to meet the friend, not excited. And when we think about the
political level, we have far too many exciting politicians doing very exciting things.
And we need more boring politicians.
More Bidens.
That do less exciting things.
But the brain is wired to pay attention to excitement
and to crave it.
But the brain evolved in situations
when you didn't have a constant stream of exciting videos.
Sometimes it was on, sometimes it was off.
And now our brains have been hacked.
And these devices, technologies, they know how to create constant excitement. And the more this happens, the more we also lose our ability,
our skill, to be bored. If we have to spend a few minutes doing nothing, somewhere waiting,
we can't do it. We immediately take out the smartphone and start watching TikTok or scrolling
through Twitter or whatever.
Did you hear about that experiment
where people would rather take an electric shock
than do nothing?
Yeah.
And, you know, you can't get, for instance,
to any level of peace of mind
if you don't know how to handle boredom.
The same way that excitement and outrage are neighbors,
peace and boredom are also neighbors.
And if you don't know how to handle boredom, if the minute there is a hint of boredom,
you run away to some exciting thing, you will never experience peace
of mind. And people, if humans don't experience peace of mind, there is no way that the world as
a whole is going to be peaceful. If I could give you the choice to be born in 1976, as you were,
or to be born now? I would go for 1976.
I mean, the people of my generation,
we were privileged to grow up in one of the most peaceful
and most optimistic eras in human history.
The end of the Cold War,
the fall of the Iron Curtain.
I don't know of any better time. But when I look at what is
happening right now, I don't envy the people who grow up in the 2020s.
What is the closing statement of hope and solution that kind of ties off this conversation? What is the thing that someone,
having gotten to this point in the conversation, should be thinking about doing, which will
cause the domino effect that will lead us to maybe a more hopeful future?
We still have agency. I mean, the algorithms are not yet in control. They are taking power away from us,
but most power is still in human hands, and every human being has some level of power, of agency,
which means that each one of us has some responsibility. Now, nobody can solve all the world's problems. So focus on one thing.
Find the one thing which is close to your heart, which you have a deep understanding of, and try to make a difference there.
And the best way to make a difference is to cooperate with other people.
And the human superpower is our ability to cooperate in large numbers.
So if you care about a specific issue, don't try to be an isolated activist.
50 individuals who cooperate as part of an organization can do much, much more than 500
isolated individuals.
So find your one thing.
And again, don't try to do everything.
Let other people do the rest
and cooperate with other people
on your chosen mission.
Yuval, your book, Sapiens,
changed the world in many ways.
It gave us a new perspective
and a new understanding of who we are as humans, where we've come from. And with that, we have a
roadmap for where we're going. It's celebrating its 10th anniversary. I have the 10th anniversary
edition here, which I'm going to beg you to sign for me after. And it really is a once in a
generation book. The numbers that I have are that it sold
more than 25 million copies.
And that's in a market where people said
no one's buying books anymore.
That's crazy.
That's absolutely crazy.
You're working on a new book,
which I'm very excited to hear about.
A little birdie told me
that it'll be announced next year.
And I'm sure everyone's incredibly energized about that.
I ask people this question sometimes just as a way to close off the show,
but I wanted to ask you it because it's especially pertinent to someone that's
got such a huge, varying wealth of work.
Is there one particular topic that is pertinent to our future that we didn't
talk about?
I would say that when we talk about the future,
history is more relevant than ever before, because history is not really the study of the
past. History is the study of change, of how things change.
Nobody cares about the past for the sake of the past.
All the people who lived in the Middle Ages
or in ancient Rome,
they're all dead.
We can't do anything
about their disasters and their misery.
We can't correct any of the wrongs that happened in ancient times.
And they don't care what we say about them.
You can say anything you want about the Romans, the Vikings, they are gone.
They don't care.
The reason to study the past is because if you understand the dynamics of change in previous centuries, in previous eras, this gives you perspective on the process of change in the present moment.
And I think the curse of history is that people have this fantasy of changing the past, of bringing
justice to the past. And this is just impossible. You cannot go back there and save the people there. The big question is, how
do you save the people now? How do you prevent catastrophes, perhaps, from happening?
And this is the reason to study history.
And the main message of history is that humans created the world we know, with nation states and corporations and capitalist economics
and religions like Christianity and Hinduism. Humans created this world, and humans can also
change it, if there is something about the world that you think is unfair, is dangerous, is problematic.
Now, some things are beyond our control.
The laws of physics are beyond our control.
So far, the laws of biology are also beyond our control.
But knowing what is natural, what is the outcome of physics and biology,
versus what is the outcome of human inventions, human stories, human institutions,
this is very difficult.
A lot of things that people think are just natural,
this is the way the world is, this is biology, this is physics,
they are not.
They are actually the result of historical processes.
And this is why it's so important to understand history,
to understand how things change
and to understand what can be changed.
We have a closing tradition on this podcast
where the last guest leaves a question for the next
guest, not knowing who they're going to be leaving it for.
Oh. The question that's been left for you:
if you could impose a global law, but only one global law, what would it be and why?
Oh, great question. I would say that people should consume less information
and spend more time reflecting on and digesting
what they already know, what they already heard.
Thank you, Yuval. It means a huge amount
to me that someone of your esteem and
someone whose books have inspired me
and turned the lights on in so many areas of my life
would have this conversation with me today.
So I thank you so much for that,
but also for turning the lights on
to the hundreds of millions of people
that have consumed your work all around the world,
the videos, the books, et cetera, et cetera.
As you said there, it's the most important work
because it helps us look back at history
in a way that is accessible and inclusive,
in a way that even I could read
without having to be a historian or understand very complex subject matter. So thank you so, so, so much.
Thank you. It's been great to be here.
Do you need a podcast to listen to next? We've discovered that people who
liked this episode also tend to absolutely love another recent episode we've done, so I've linked
that episode in the description below.
I know you'll enjoy it.