The Diary Of A CEO with Steven Bartlett - Yuval Noah Harari: This Election Will Tear The Country Apart! AI Will Control You By 2034! The Dark Truth Behind Meta & X!
Episode Date: September 5, 2024
Can humanity handle AI or will it be our downfall? Yuval Noah Harari looks back at history to guide us through this uncertain journey ahead. Yuval Noah Harari is a best-selling author, public intellectual and Professor of History at the Hebrew University of Jerusalem. He is the author of multi-million-bestseller books such as 'Sapiens: A Brief History of Humankind' and 'Homo Deus: A Brief History of Tomorrow'. In this conversation, Yuval and Steven discuss topics such as how AI is disconnecting people, the best way to control fake news, how institutions drive trust, and how Trump could change democracy.
(00:00) Intro
(02:31) Will Humans Continue To Rule The World?
(06:48) Why AI Is The Biggest Game-Changer In History
(10:34) Is AI Just Information Or Something More?
(16:53) Can AI Manipulate Our Bank Accounts And Political Views?
(21:42) How AI Will Affect Human Intimacy
(23:44) Will AI Replace Teachers?
(25:09) Why Online Information Is Junk
(28:42) How Politicians Use Fear To Manipulate Us
(31:52) Should There Be A Free Speech Movement?
(39:38) How Algorithms Are Shaping Global Politics And Increasing Fear
(45:30) The Impact Of The US Election On Global Politics
(48:48) What Trump Could Do To US Democracy
(50:29) Can We Trust What We See On Social Media And The News?
(55:37) Will AI Eventually Run Governments?
(00:58:32) What Jobs Will AI Leave For Humans?
(01:02:01) Which Jobs Will Be Automated By AI?
(01:05:33) Is AI Conscious?
(01:07:19) AI, Robots, And The Future Of Consciousness
(01:10:01) Are We Living In A Simulation?
(01:13:09) How Algorithms Control Our Lives
(01:16:21) Understanding The AI Alignment Problem
(01:21:13) The Relationship Between AI And Corporate Interests
(01:25:04) The Growing Threat Of Totalitarian Governments
(01:33:08) How The Algorithm Knows It's Fake News
(01:38:01) Is Yuval Tempted To Log Off?
(01:41:52) Will Humans Become Two Species?
(01:44:00) What's The Solution To The Negatives Of AI?
(01:49:17) The Last Guest Question
Follow Yuval:
Instagram - https://g2ul0.app.link/7iXk5TnLAMb
Twitter - https://g2ul0.app.link/eaKRt5rLAMb
You can pre-order Yuval's book 'NEXUS: A Brief History of Information Networks from the Stone Age to AI' here: https://g2ul0.app.link/Wae95KONAMb
Watch the episodes on YouTube - https://g2ul0.app.link/DOACEpisodes
My new book! 'The 33 Laws Of Business & Life' is out now - https://g2ul0.app.link/DOACBook
Get your hands on the brand new Diary Of A CEO Conversation Cards here: https://appurl.io/iUUJeYn25v
Follow me: https://g2ul0.app.link/gnGqL4IsKKb
Sponsors:
LinkedIn Jobs: linkedin/doac
Transcript
Quick one. Just wanted to say a big thank you to three people very quickly. The first people I want to say thank you to are all of you that listen to the show. Never in my wildest dreams is all I can say.
Never in my wildest dreams did I think I'd start a podcast in my kitchen and that it would expand
all over the world as it has done. And we've now opened our first studio in America, thanks to my
very helpful team led by Jack on the production side of things. So thank you to Jack and the team
for building out the new American studio. And thirdly, to Amazon Music, who, when they heard that we were expanding to the United States and I'd be recording a lot more over in the States, put a massive billboard in Times Square for the show. So thank you so much, Amazon Music. Thank you to our team. And
thank you to all of you that listened to this show. Let's continue.
The humans are still more powerful than the AIs. The problem is that we are divided against each other
and the algorithms are using our weaknesses against us.
And this is very dangerous
because once you believe that people who don't think like you
are your enemies, democracy collapses.
And then the election becomes like a war.
So if something ultimately destroys us,
it will be our own delusions, not the AIs.
We have a big election in the United States.
Yes, democracy in the States is quite fragile.
But the big problem is, what if...
Surely that will never happen.
Yuval Noah Harari,
the author of some of the most influential non-fiction books in the world today,
and is now at the forefront of exploring the world-shaping power of AI
and how it is beyond anything humanity has ever faced before.
The biggest social networks in the world, they're effectively going to go for free speech.
What is your take on that?
The issue is not the humans. The issue is the algorithms.
So let me unpack this.
In the 2010s, there was a big battle between algorithms for human attention.
Now the algorithms discovered, when you look at history,
the easiest way to grab human attention is to press the fear button, the hate button, the greed button. The
problem is that there was a misalignment between the goal that was defined to the algorithm and
the interests of human society. But this is where it becomes really disconcerting. Because if so
much damage was done by giving the wrong goal to a primitive social media algorithm, what would be
the results with AI in 20 or 30 years? So what's the solution?
We've been in this situation many times before in history, and the answer is always the same,
which is...
Are you optimistic?
I try to be a realist.
This is a sentence I never thought I'd say in my life.
We've just hit 7 million subscribers on YouTube.
And I want to say a huge thank you to all of you that show up here every Monday and Thursday to watch our conversations
from the bottom of my heart, but also on behalf of my team who you don't always get to meet. There's
almost 50 people now behind The Diary Of A CEO that work to put this together. So from all of
us, thank you so much. We did a raffle last month and we gave away prizes for people that subscribed to the show up until 7 million subscribers. And you guys loved that raffle so much that we're going
to continue it. So every single month we're giving away Money Can't Buy prizes, including meetings
with me, invites to our events and £1,000 gift vouchers to anyone that subscribes to the Diary
of a CEO. There's now more than 7 million of you. So if you make the decision to subscribe today,
you can be one of those lucky people.
Thank you from the bottom of my heart.
Let's get to the conversation.
10 years ago, you made a video that was titled, Why Humans Run the World.
It's a very well-known TED talk that you did.
After reading your new book, Nexus, I wanted to ask you a slightly modified question, which is: do you still believe that 10 years from now humans will fundamentally be running the world?
I'm not sure. It depends on the decisions we all take in the coming years. But there is a chance that the answer is no, that in 10 years, algorithms and AIs will be running the world.
I'm not having in mind some kind of Hollywoodian science fiction scenario of one big computer kind of conquering the world.
It's more like a bureaucracy of AIs, that we will have millions of AI bureaucrats everywhere, you know, in the banks, in the government, in businesses, in universities, making more and more decisions about our lives, everyday decisions: whether to give us a loan, whether to accept us to a job. And we will find it more and more difficult to understand
the logic, the rationale, why the algorithm refused to give us a loan,
why the algorithm accepted somebody else for the job. And, you know, you could still have
democracies with people voting for this president or this prime minister. But if most of the
decisions are made by AIs, and humans, including the politicians, have difficulty understanding the reason why the AIs are making a particular decision, then power will gradually shift from humanity to these new alien intelligences.
Alien intelligences?
Yeah, I prefer to think about AI. I know that the acronym is artificial
intelligence, but I think it's more accurate to think about it as an alien intelligence,
not in the sense of coming from outer space, but in the sense that it makes decisions in a fundamentally different way than human minds. Artificial carries the sense that we design it, we control it. Something artificial is made by
humans. With each passing year, AI is becoming less and less artificial and more and more alien.
Yes, we still design the kind of baby AIs, but then they learn and they change and they start making unexpected decisions, and they start coming up with new ideas which are alien to the human way of doing things. You know, there is this famous example with the game of Go: in 2016, AlphaGo defeated the world champion, Lee Sedol. But the amazing thing about it was the way it did it.
Because humans have been playing Go
for 2,500 years.
A board game.
A board game, a strategy game
developed in ancient China
and considered one of the basic arts
that any cultivated, civilized person
in East Asia had to know.
And tens of millions of Chinese and Koreans and Japanese played Go for centuries.
Entire philosophies developed around the game of how to play it.
It was considered a good preparation for politics and for life.
And people thought that they explored the entire realm,
the entire geography landscape of Go.
And then AlphaGo came along and showed us that actually for 2,500 years,
people were exploring just a very small bit,
a very small part of the landscape of Go.
There are completely different strategies of how to play the game
that not a single human being came up with in more than 2000 years of playing it. And AlphaGo
came up with it in just a few days. So this is alien intelligence. And, you know, this is just a game, but the same thing is likely to happen in finance, in medicine, in religion, for better or for worse.
You wrote this book Nexus. Nexus. How do you pronounce it?
Nexus.
Nexus.
I'm not an expert on pronunciations.
You could have written many a book. You're someone that's, I think, broadly curious about the nature of life, but also the nature of history. For you to write a book that is so detailed and comprehensive, there must have been a pretty strong reason why this book had to come from you now. And why is that?
Because I think we need a historical perspective on the AI revolution. I mean,
there are many books about AI. Nexus is not a book about AI. It's a
book about the long-term history of information networks. I think that to understand what is
really new and important about AI, we need a perspective of thousands of years, to go back
and look at previous information revolutions, like the invention of writing and the printing press and the radio.
And only then you really start to understand
what is happening around us right now.
One thing you understand, for instance,
is that AI is really different.
People compare it to previous revolutions,
but it's different because it's the first technology ever in human history that is able to make decisions independently and to create new ideas independently.
A printing press could print my book, but it could not write it.
It could just copy my ideas. An atom bomb could destroy a city
but it can't decide by itself
which city to bomb
or why to bomb it.
And AI can do that.
And you know,
there is a lot of hype right now
around AI
so people get confused
because they now try to sell us
everything as AI.
Like you want to sell this table
to somebody?
Oh, it's an AI table.
And this water, this is AI water.
So people ask, what is AI? Is everything AI? No, not everything.
There is a lot of automation out there,
which is not AI.
If you think about a coffee machine
that makes coffee for you,
it does things automatically,
but it's not an AI.
It's pre-programmed by humans to do
certain things, and it can never learn or change by itself. A coffee machine becomes an AI if you come to the coffee machine in the morning and the machine tells you, hey, based on what I know about you, I guess that you would like an espresso.
It learned something about you and it makes an independent decision.
It doesn't wait for you to ask for the espresso.
And it's really AI if it tells you, I just came up with a new drink,
it's called Buffy,
and I think you would like it.
That's really AI.
When it comes up with completely new ideas that we did not program into it and that we did not anticipate.
And this is a game changer in history.
It's bigger than the printing press.
It's bigger than the atom bomb.
You said we need to have a historical perspective on it.
Do you consider yourself to be a historian?
Yes, I'm a historian by profession.
This is my training.
I was originally a specialist
in medieval military history.
I wrote about the Crusades
and the Hundred Years' War
and the strategy and logistics
of the English armies that invaded France
in the 14th century.
These were my first articles.
And this is the kind of perspective, or knowledge, that I also bring to try and understand what's happening
now with AI.
Because most people's understanding of what AI is comes from them playing around with
a large language model like ChatGPT or Gemini or Grok or something. That's like their understanding
of it. You can ask it a question and it gives you an answer. That's really what people think of AI as.
And so it's easy to be a bit complacent with it or to see this technological shift as being
trivial. But when you start talking about information and the disruption of the flow
of information and information networks, and when you bring it back through history and you give us this perspective on the fact that information effectively glues us all together, then, for me, it starts to become something I think about completely differently.
I mean, there are two ways I think about it.
I mean, one way is that when you realize, as you said, that information is the basis for everything, then when you start to shake the basis, everything can collapse or change, or something new could come up. For instance, democracies
are made possible only by information technology. Democracy, in essence, is a conversation,
a group of people conversing, talking, trying to make decisions together.
Dictatorship is when somebody dictates everything. One person dictates everything; that's dictatorship.
Democracy is a conversation.
Now, in the Stone Age, hunter-gatherers living in small bands, they were mostly democratic.
Whenever the band needed to decide anything, they could just talk with each other and decide.
As human societies grew bigger,
it just became technically difficult to hold the conversation. So the only examples we have from the ancient world for democracies are small city-states like Athens or Republican Rome. These
are the two most famous examples, not the only ones, but the most famous. And even the ancients,
even philosophers like Plato and Aristotle, they knew once you go beyond the level of a city-state,
democracy is impossible. We do not know of a single example from the pre-modern world
of a large-scale democracy. Millions of people spread over a large territory, conducting their political
affairs democratically. Why? Not because of this or that dictator that took power, because democracy
was simply impossible. You cannot have a conversation between millions of people when you
don't have the right technology. Large-scale democracy becomes possible only in the late modern era
when a couple of information technologies appear, first the newspaper, then telegraph and radio and
television, and they make large-scale democracy possible. So democracy, it's not like you have
democracy and on the side you have these information technologies. No, the basis of democracy is information technology.
So if you have some kind of earthquake in the information technology,
like the rise of social media or the rise of AI,
this is bound to shake democracy,
which is what we now see around the world: we have the most sophisticated information technology in history, and people can't talk with each other. The democratic conversation is breaking down.
And every country has its own explanation. Like you talk to Americans, what's happening there between Democrats and Republicans? Why can't they agree on even the most basic facts?
And they give you all these explanations about the unique conditions of American history and society.
But you see the same thing in Brazil.
You see the same thing in France, in the Philippines.
So it can't be the unique conditions of this or that country.
It's the underlying technological revolution.
And the other thing that I bring from history is how even relatively small technological changes, seemingly small changes, can have far-reaching consequences.
Like you think about the invention of writing.
Originally, it was basically people playing with mud.
I mean, writing was invented many times in many places, but the first time was in ancient Mesopotamia. People take clay tablets, which are basically pieces of mud, and they take a stick and use the stick to make marks in the clay, in the mud. And this is the invention
of writing. And this had a profound effect. To give just one example, you think about ownership.
What does it mean to own something? Like I own a house, I own a field. So previously, before writing,
to own a field, if you live in a small Mesopotamian village, like 7,000 years ago, is a community affair. It means that your neighbors agree that this field is yours, and they don't pick fruits there and they don't graze their sheep there, because they agree it's yours. It's a community agreement. Then comes writing, and you have written documents, and ownership changes its meaning. Now to own a field or a house means that there is some piece of dry mud somewhere in the archive of the king
with marks on it that says that you own that field.
So suddenly, ownership is not a matter of community agreement between the neighbors.
It's a matter of which document sits in the archive of the king.
And it also means, for instance, that you can sell your land to a stranger without the permission of your neighbors, simply by giving the stranger this piece of dry mud in exchange for gold or silver or whatever. So what a big change from a seemingly simple invention, like using a stick to draw some signs on a piece of mud.
And now think about what AI will do to ownership. Like maybe 10 years down the line to own your
house means that some AI says that you own it.
And if the AI suddenly says that you don't own it, for whatever reason that you don't even know,
that's it. It's not yours.
That mark on that piece of mud was also the invention of, sort of, written language. And I was thinking, when I was reading your book, about how language holds our society together, not in the way that we often might assume, as in me having a conversation with you, but passwords, poetry, banking. It's like our whole society is secured by language. And the thing that the AIs have mastered with large language models is the ability to replicate that,
which made me think about all the things
that in my life are actually held together
with language, even my relationships now,
because I don't see my friends.
My friends live in Dubai and America and Mexico.
So we converse in a language.
Our relationships are held together in language.
And as you said, democracies are held together
in language.
And now there's
a more intelligent force
that's mastered that.
Yeah, it was so unexpected.
Like, you know, five years ago,
people said,
AI will master this or that,
self-driving vehicles.
But language?
Nah, this is such
a complicated problem.
This is the human masterpiece,
language.
It will never master language.
And then ChatGPT came. And, I have to say, you know, I'm a words person, and I'm simply amazed
by the
quality of the texts
that
these large language
models produce. It's not perfect
but they really
understand the semantic field of words. They can
string words together into sentences to form a coherent text. That's really remarkable. And as
you said, I mean, this is the basis for everything. Like I give instructions to my bank with language. If AI can generate text and audio and image, then how do I communicate with the bank in a way which is not open to manipulation by an AI?
But the tempting part in that sentence is, you don't like communicating with your bank anyway, as in calling them, being on the phone, waiting for another human.
So the temptation is, I don't like speaking to my bank anyway,
so I'm going to let the AIs do that.
I'm going to invest.
If I can trust them.
I mean, the big question is, I mean, why does the bank want me to call personally to make sure that it's really me?
It's not somebody else telling the bank, oh, make this transfer to, I don't know,
Cayman Islands.
It's really me.
And how do you make sure, how do you build this trust?
I mean, the whole of finance for thousands of years is just one question, trust.
All these financial devices, money itself is really just trust.
It's not made from gold or silver or paper or anything.
It's how do you create trust between strangers?
And therefore, most financial inventions,
in the end, they are linguistic and symbolic inventions.
You don't need some complicated physics.
It's complicated symbolism.
And now AI might start creating new financial devices
and will master finance because it mastered language.
And like you said,
we now communicate with other people,
our friends, all over the world.
In the 2010s,
there was a big battle
between algorithms for human attention.
We're just discussing it before the podcast.
How do we get the attention of people?
But there is something
even more powerful out there than attention,
and that's intimacy.
If you really want to influence people,
intimacy is more powerful than attention.
How are you defining intimacy in this regard?
Someone that you have a long-term acquaintance with,
that you know personally, that you trust, that to some extent that you love, that you have a long-term acquaintance with, that you know personally, that you trust,
that to some extent that you love, that you care about.
And until today, it was utterly impossible to fake intimacy
and to mass-produce intimacy.
You know, dictators could mass-produce attention.
You know, once you have, for instance, radio,
you can tell all the people in Nazi Germany or in the Soviet Union,
the great leader is giving a speech, everybody must turn their radio on and listen.
So you can mass produce attention.
But this is not intimacy.
You don't have intimacy with the great leader.
Now with AI, you can, for the first time in history,
at least theoretically, mass-produce intimacy, with millions of bots, maybe working for some government, faking intimate relationships with us, and it will be hard to know that this is a bot and not a human being.
It's interesting, because I've had so many conversations with relationship experts and a variety of people that speak to the decline in human-to-human intimacy and the rise in loneliness, and us becoming more sexless as a society, and all of these kinds of things. So it's almost, with the decline in human-to-human intimacy
and human-to-human connection
and the rise of this sort of possibility
of artificial intimacy,
it begs the question what the future might look like
in a world where people are lonelier than ever,
more disconnected than ever,
but still have the same Maslowian need for that connection
and that feeling of love and belonging.
And maybe this is why we're seeing a rise in polarization
at the same time,
because people are desperately trying to belong somewhere
and the algorithm is reinforcing my echo chamber.
But I don't know how that ends.
I don't think it's deterministic.
It depends on the decision we make individually and as a society.
There are, of course, also wonderful things that this technology can do for us.
The ability of AI to hold a conversation means AI therapists that can give us better healthcare services, better education services than ever before. Think of a child today in a class of 40 other kids, where the teacher is barely able to give attention to this particular child
and understand his or her specific needs and his or her specific personality. You can have an AI
tutor that is focused entirely on you and that is able to give you a quality of education which is
really unparalleled. I had this debate with my friend on the weekend.
He's got two young kids who are one year old and three years old. And we were discussing, in the future, in sort of 16 years' time, where would you rather send your child? Would you rather send your
child to be taught by a human in a classroom, as you've described with lots of people, lots of
noise, where they're not getting personalized learning. So if the classroom is more intelligent, they're being left behind. If they're more intelligent, they're being dragged back.
Or would you rather your child sat in front of a screen, potentially, or a humanoid robot, and was given really personalized, tailored education that was probably significantly cheaper than, say, private education or university?
You need the combination. I mean, I think that for many of the lessons, it will be better to go with the AI tutor, and you don't even have to sit in front of a screen. You can go to the park and get a lesson on ecology, just listening as you walk. But you will need large groups of kids for break time. Because very often
you learn that the most important lessons in school are not learned during the lessons,
they are learned during the breaks. And this is something that should not be automated.
You would still need a large group of children together with human supervision for that.
The other thing I thought about a lot when I was reading your book is this idea that
I would assume us having more information and more access to information would lead
to more truth in the world, less conspiracy, more agreement.
But that doesn't seem to be the case.
No, not at all. Most information in the world is junk. I mean, I think the best way to think about
it is it's like with food. That there was a time, like a century ago in many countries,
where food was scarce. So people ate whatever they could get, especially if it was full of fat and sugar. And they thought that
more food is always good. Like if you asked your great grandmother, she would say, yes,
more food is always good. And then we reach a time of abundance in food. And we have all this
industrialized processed food, which is artificially full of fat and sugar and salt and whatever.
And it's obviously bad for us.
The idea that more food is always good, no.
And definitely not all this junk food.
And the same thing has happened with information.
That information was once scarce.
So if you could get your hands on a book, you would read it because there was nothing else.
And now information is abundant.
We are flooded by information. And much of it is junk information, which is artificially full of greed and anger and fear because of this battle for attention. And it's not good for us.
So we basically need to go on an information diet. Again, the first step is to
realize that it's not the case that more information is always good for us. We need a
limited amount, and we actually need more time to digest the information. And we have to be,
of course, also careful about the quality of what we take in, because, again, of the abundance of junk information.
And the basic misconception, I think,
is this link between information and truth.
That people think, okay, if I get a lot of information,
this is the raw material of truth,
and more information will mean more knowledge.
And that's not the case,
because even in nature,
most information
is not about the truth.
The basic function
of information in history
and also in biology
is to connect.
Information is connection.
And when you look at history,
you see that very often
the easiest way to connect people is not with the truth.
Because the truth is a costly and rare kind of information.
It's usually easier to connect people with fantasy, with fiction.
Why? Because the truth tends to be not just costly.
The truth tends to be complicated and it tends to be uncomfortable just costly, the truth tends to be complicated, and it tends to be uncomfortable
and sometimes painful. If you think, you know, like in politics, a politician who would tell
people the whole truth about their nation is unlikely to win the elections, because every
nation has this skeleton in the cupboard and all these dark sides and dark episodes that people don't want to be confronted with.
So we see that politically, it's not, if you want to connect nations, religions, political parties, you often do it with fictions and fantasies.
And fear.
I was thinking about sapiens
and the role that stories play
in engaging our brains.
And I was thinking a lot about the narratives.
In the UK, we have a narrative
where we're told that
much of the cause of the problems
we have in society,
unemployment,
other issues with crime,
are because there's people
crossing from France on boats.
And it's a very effective narrative
to get people to band together to march in the streets.
And in America, obviously,
the same narrative of the wall and the southern border,
they're crossing our border in the millions.
They're rapists.
They're not sending their good people.
They're coming from mental institutions.
It has galvanized people together.
And those people are now like marching in the streets
and voting based on that story
that is a fearful story.
It's a very powerful story because it connects to something very deep inside us. And if you want to get people's attention, if you want to get people's engagement, the fear button is one of the most efficient, most effective buttons to press in the human mind.
And again, it goes back to the Stone Age.
So if you live in a Stone Age tribe, one of your biggest worries is that the people from
the other tribe will come to your territory and will take your food or will kill you.
So this is a very ingrained fear,
not just in humans,
in every social animal.
They did experiments on chimpanzees
that show that chimpanzees
have also a kind of almost instinctive
fear or disgust
towards foreign chimpanzees
from a different band.
And politicians and religious leaders,
they learn how to play on these human emotions almost like you're playing a piano.
Now, originally, these feelings like disgust,
they evolved in order to help us.
You know, on the most basic level,
disgust is there because, you know, especially as a kid, you want to experiment with different foods. But if you eat something that is bad for you, you need to, you know, puke it, you need to throw it out. So you have disgust protecting you. But then
you have religious and
political leaders throughout history hijacking this
defensive mechanism and teaching people from a very young age to not just to fear, but to be
disgusted by foreign people, by people who look different. And this is, again, as an adult,
you can learn all the theories and you can educate yourself that this is not true, but still, very deep in your mind, there is a part that is just: these people are disgusting, these people are dangerous. And when you look at history, you see how many different movements have learned how to use these emotional mechanisms to motivate people.
We sit down at a very interesting time, Yuval,
because two quite significant things have happened in the last, I think, year
as it relates to information and many of the things we've been talking about.
One of them
is Elon Musk bought Twitter, and his real mandate has been this idea of free speech. And as part of that mandate, he's unblocked a number of figures who were previously blocked on Twitter, a lot of them right-leaning people that were blocked for a variety of different reasons. And then also, this week, Mark Zuckerberg released basically a letter publicly, and in that letter he says that he regrets the fact that he cooperated so much with the FBI, the government, when they asked him to censor things on Facebook, one particular story. He says he regrets doing that. And it looks like, if you read between the lines of what he's saying, well, he actually says explicitly, he says, we're going to push back harder in the future
if governments or anybody else asks us to censor certain messaging.
Now, what I'm seeing is that Twitter,
which is one of the biggest social networks in the world,
and Meta, the biggest social network in the world,
have now taken this stance,
and effectively they're going to let information flow.
They're effectively going to go for this free speech narrative.
Now, as someone that's used these platforms for a long time, specifically X, or Twitter, it is crazy how different it is these days. There are things that I see every time I scroll that I never would have seen before this free speech position. Now, I'm not taking a stance on whether it's good or bad; it's just very interesting. And there's clearly an algorithm at work. If I go on X right now, I will see someone being killed with a knife, I reckon, within 30 seconds, and I will see someone getting hit by a car, I will see extreme Islamophobia potentially, but then I'll also see the other side. So it's not just one side; I'll see all of the sides. And when you were talking earlier about whether that's good for me, I had a flashback to my friend this weekend. It was my birthday, so me and my friends were together, and I was just looking over at him mindlessly scrolling these horror videos on Twitter as he sat on my left, thinking, God, he's frying his dopamine receptors. And I just think about this whole new free speech movement. What is your take on this idea of free speech in the world?
Only humans have free speech. Bots don't have free speech.
The tech companies are constantly confusing us about this issue because the issue is not the
humans. The issue is the algorithms. And let me explain what I mean. If the question is
whether to ban somebody like Donald Trump from Twitter, I agree this is a very difficult issue,
and we should be extremely careful about banning human beings, especially important politicians,
from voicing their views and opinions. However much we dislike their opinions or them personally, it's a very serious
matter to ban any human being from a platform. But this is not the problem. The problem on the
platform is not the human users. The problem is the algorithms and the companies constantly
shift the blame to the humans in order to protect their business interests. So let me unpack this.
Humans create a lot of content all the time. They create hateful content, they create sermons on
compassion, they create cooking lessons, biology lessons, so many different things, a flood of
information. The big question is, then what gets human attention? Everybody wants attention. Now, the companies also want attention. The companies give the algorithms that run the social media platforms a very simple goal. Make people spend more time on Twitter, more time on Facebook, engage more, sending more likes and recommending it to their friends.
Why? Because the more time we spend on the platforms, the more money they make.
Very, very simple.
Now, the algorithms made a huge, huge discovery.
By experimenting on millions of human guinea pigs, the algorithms discovered that if you want to grab human attention,
the easiest way to do it is to press the fear button, the hate button, the greed button.
And they started recommending to users to watch more and more content full of hate and fear and greed
to keep them glued to the screen.
And this is the deep cause of the epidemic of fake news and conspiracy theories and so forth.
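To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of objective being described here, assuming a toy feed that ranks posts purely by measured watch time. The posts and numbers are invented for illustration; no real platform's code is shown.

    # A toy engagement-maximizing feed (illustrative only).
    # Assumption: the only goal given to the algorithm is
    # "maximize time spent on the platform".
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        emotion: str              # e.g. "fear", "outrage", "neutral"
        avg_watch_seconds: float  # engagement measured on past users

    def rank_feed(posts):
        # The algorithm knows nothing about truth or social harm;
        # it sorts purely by the one metric it was given.
        return sorted(posts, key=lambda p: p.avg_watch_seconds, reverse=True)

    feed = rank_feed([
        Post("Calm explainer on the local budget", "neutral", 12.0),
        Post("THEY are coming for your town", "fear", 95.0),
        Post("Lentil soup cooking lesson", "neutral", 20.0),
        Post("You won't BELIEVE what they did", "outrage", 80.0),
    ])
    for p in feed:
        print(f"{p.avg_watch_seconds:5.1f}s  {p.emotion:8s}  {p.title}")
    # Fear and outrage float to the top, not because anyone chose hate,
    # but because the stated goal ("more watch time") rewards it.

The misalignment described in the conversation is exactly this gap: the goal handed to the code is engagement, while the interests of society are something the code never sees.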
And the defense of the companies is we are not producing the content. Somebody, a human being produced a hate-filled conspiracy theory about immigrants
and it's not us.
It's a bit like, I don't know, the chief
editor of the New York Times
publishing a hate-filled conspiracy theory
on the front of the first page
of the newspaper. And when you ask him, why did you do it, or you blame him, look what you did, he says, I didn't do anything.
I didn't write the piece.
I just put it on the front of the New York Times.
That's all.
That's nothing.
It's not nothing.
People are producing an immense amount of content.
The algorithms are the kingmakers.
They are the editors now.
They decide what gets viewed.
Sometimes they just recommend it to you.
Sometimes they actually autoplay it to you. Like, you chose to watch some video, and at the end of the video, to keep you glued to the screen, the algorithm immediately, without you telling it to, autoplays some kind of video full of fear or greed, just to keep you glued to the screen. It is the algorithm doing
it. And this should be
banned or this should at least be
supervised and regulated.
And this is not freedom of speech
because the algorithms
don't have freedom of speech.
Yet the person who produced the hate-filled video,
I would be careful about banning them.
But that's not the problem.
It's the recommendation which is the problem.
The second problem is that a lot of the conversations now online
are being overrun by bots. Again, if you look, for instance,
at Twitter X as an example, so people often want to know what is trending, which stories get the
most attention. If everybody's interested in a particular story, I also want to know what
everybody's talking about. And very often, it's the bots that are driving
the conversation. Because a particular story initially gets a lot of traction, a lot of
traffic, because a lot of bots retweet it. And then people see it, and they don't know it's bots. They think it's humans. So they say, oh, lots of humans are interested in this, so I
also want to know what's happening.
And this draws more attention.
This should be forbidden. Very basically, you cannot have AIs pretending to be human beings. These are fake humans, counterfeit humans. If you see activity online and you think it's human activity, but actually it's bot activity, this should be banned.
And it doesn't harm the free speech of any human being because it's a bot.
It doesn't have freedom of speech.
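The amplification loop described here can also be sketched in a few lines, again with invented numbers: a naive trending score that counts every retweet equally cannot tell a bot farm from a crowd of humans.

    # A toy trending metric (illustrative numbers only).
    def trending_score(human_retweets: int, bot_retweets: int) -> int:
        # The platform sees only total retweets; it cannot
        # distinguish humans from counterfeit humans.
        return human_retweets + bot_retweets

    fringe = trending_score(human_retweets=200, bot_retweets=5000)   # 5200
    popular = trending_score(human_retweets=3000, bot_retweets=0)    # 3000

    print("fringe story:", fringe)    # shown as the #1 "trend"
    print("popular story:", popular)
    # Humans then see the fringe story trending, assume other humans
    # care about it, and add real attention on top of the fake signal.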
I was thinking a lot about what you said about how these algorithms are actually running the world. So if the algorithms are deciding what I see based on what I spend my time looking at, because the platforms want to make more money, and if I have an innate predisposition to spend more time focused on things that scare me, then you just have to give me a couple of years. And every year that goes past, I'll become more fearful.
It reinforces your own weaknesses.
It's like the food industry.
So the food industry discovered we like food with a lot of salt and fat in it, and it gives us more of it.
And then it says, but this is what the customers want.
What do you want from us? It's the same thing, but even worse, with these algorithms, because this is the food for the mind.
Yes, humans have a tendency that if something is very frightening or something fills them with anger, they focus on it and they tell all the friends about it.
But to artificially amplify it, it's just not good for our mental health and social health.
It is using our own weaknesses against us instead of helping us deal with them.
Is it fair to say, now this is me just jumping to conclusions a little bit, but is it fair to say
that in a world where you remove restrictions around blocking certain characters, right-wing characters whose messages are maybe based on immigration, etc., so they're all allowed on every platform, and then you program the algorithm to be focused on revenue, eventually more people will become right-wing? And I say that in part because it's a right-wing narrative to say that immigrants are bad. And, you know, I'm not saying that the left are innocent, because they're absolutely not, but I'm saying that the fearful narratives, the fear, seems to come more from the right, in my opinion. Especially in the UK, the fear comes from immigrants and these people are going to take your money, and all these kinds of things.
I think the key issue is not to label it as a right or left issue, because, again, democracy is a conversation. And you can have a conversation only if you have several different opinions. And I think it should be OK to have a conversation about immigration, that people should be able to have different opinions about it. That's fine.
The problem starts when one side vilifies and demonizes anybody who doesn't think like them. And you see it to some extent from both sides. But in the case of immigration, you would have these conspiracy theories that anybody who supports immigration, for instance, they want to destroy the country. They are part of this conspiracy to flood the country with immigrants and to change its nature, and whatever. And this is the problem, that once you believe that people who don't think like you,
they are not just your political rivals, they are your enemies. They are out to destroy you.
They intend to destroy your way of life, your group. Then democracy collapses,
because there can be no way; between enemies,
democracy doesn't work. It works if you think that the other side is wrong, but there are still
essentially good people who care about the country, who care about me, but they have different
opinions. If you think that they are my enemies, they try to destroy me, then the election becomes like a war
because you're fighting for your survival. You will do anything to win the election because
your survival is at stake. If you lose, you have no incentive to accept the verdict. If you win,
you only take care of your tribe and not of the enemy tribe.
What if you don't believe the election is legitimate?
Then democracy can't function.
This is, again, the basic...
Democracy can't exist in just any...
It's like a delicate plant that needs certain conditions in order to survive and to flourish.
And one condition, for instance, is that you have information technologies that allow a conversation. Another condition is that you trust the institutions.
If you don't trust the institution of elections, it doesn't work. And a third condition is that you need to think that the people on the other side of the political divide are your rivals, but not your enemies. Now, the problem with what's happening now with democratic conversations is that, because of
this tendency to go to more and more and more extremes, it creates the impression that the
other side is an enemy. And this is a problem not just for the right, also for the left.
That on both sides, you see this feeling that the other side is an enemy, and that its positions
are completely illegitimate. And if we reach that point, then the conversation collapses.
And it should be possible to have complex conversations
and discussions about difficult issues like immigration, like gender, like climate change,
without seeing the other side as an enemy, which was possible for generations. So why is it that
now it seems to just become impossible to talk with the other side
or to agree about anything?
We have a big election in the United States this year.
A very big one, yeah.
Do you think a lot about it?
Yes, yes. I mean, it seems like it would be a coin toss, like 50-50. You know, elections become really an existential issue if there is a chance they will be the last elections. If one side intends to simply change the rules of the
game, if it comes to power, then it becomes existential. Because again, democracy works
on the basis of self-correcting mechanisms. This is the big advantage of democracy over
dictatorship. In a dictatorship, a dictator can make a lot of good decisions, but sooner or later,
they will make a bad decision. And there is no mechanism in a dictatorship to identify and correct such mistakes.
Like Putin.
Yeah.
There is just no mechanism in Russia that could say Putin made a mistake.
He should go.
He should let somebody else try a different course of action.
This is the great advantage of democracy.
You try something. It doesn't work. You try something else. But the big problem is, what if those in power refuse to let go? Look at Venezuela. Now, in the last elections a couple of weeks ago, the evidence is very, very clear that
Maduro lost big time, but he controls everything, the election committee, everything. And he claims,
no, I won. And they destroyed Venezuela. You know, it's something like a quarter of the population fled the country,
which was one of the richest countries in South America before,
and they just can't get rid of the guy.
Surely that will never happen in the West.
Oh, don't say never in history.
History can catch up with you, whoever you are.
That's one of the illusions we kind of...
Venezuela was part of the West, in many ways still is.
This is one of the illusions we live under, though.
We think that that can never happen to the UK or the United States or Canada,
these sort of, quote unquote, civilized nations.
You know, according to some measurements,
democracy in the United States is quite new and quite fragile, if you think about it in terms of who gets to vote, for instance. And there is a chance, maybe a 20% chance, that a Trump administration would change the rules of the game of American democracy
in such a way as to make it, for instance, by changing the rules about who votes or how
do you count votes, that it will become almost impossible to get rid of them.
That's not outside the realm of the possible, in historical terms.
Do you think it's possible
that Trump will do that?
Yes.
I mean, you saw it on the 6th of January.
I mean, the most sensitive moment
in every democracy
is the moment of transfer of power.
And the magic of democracy
is that democracy is meant to ensure
a peaceful transfer of power.
But as I said, like you choose one party, you give them a try.
After some time, if people say they didn't do a good job, let's try somebody else.
And, you know, in the United States we have a person who holds the biggest power in the world. The president of the United States has enough power to destroy human civilization.
All these nuclear missiles,
all this armament,
and he loses the election.
And he says, okay,
I give up all this power
and I let the other guy try.
This is amazing.
And this is exactly what Trump didn't do.
He, from the beginning,
I mean, even from 2016, he refused.
They asked him directly, if you lose the election, will you accept the results?
And he said no.
And in 2020, he did not hand over power peacefully.
He tried to prevent it.
And the fact that he's now running again... I think to some extent the lesson he got from the 6th of January is that, I can basically get away with anything, at least with my people, with my base. It was like a test, a try: if I do this extreme thing and they still support me afterwards, it basically means they will support me no matter what I do.
I'm wondering, in a world of such a fragile democracy, when information flows and networks are disrupted by something like AI, with misinformation and disinformation and the ability for me to make a video, I could make a video right now of Donald Trump speaking and saying something in his voice, and I could help that video go viral, how do you hold together democracy and communication when you don't believe anything that you're seeing online? And we're just at the start of this now, so we haven't seen anything yet. This is just really the first baby steps.
I'm going to play a video on the screen right now so people can see. And for those listening,
you'll just hear it. I'm going to play a video that Isaac over there in the corner of the room
made of me speaking in this chair. And it wasn't me. And I didn't say it. And I wasn't in this
chair. Hey there, this is AI Steve. Do you think I'll be able to take over The Diary Of A CEO one day? Leave your comments below.
And it sounds exactly like me, identical, and it's not me. And I wonder about this, because, you know, most of us get our political information, and our information generally, now from social media. And if I can't believe anything that I'm seeing, because it's all so easy to make, some kid in Russia in their bedroom can make a video of the prime minister here, I don't know where we get our information from anymore, how we verify it.
The answer is institutions. We've been in this situation
many times before in history, and the answer is always the same, institutions. You cannot trust
the technology. You trust the institution that verifies the information. Think about it like with print, that you can write on a piece of paper anything you
want. You can write the Prime Minister of Britain said, and then you open quotation marks, and you
put something into the mouth of the Prime Minister. You can write anything you want. And when people
read it, they don't believe it, or they shouldn't believe it. Just because it's written that the
prime minister said it doesn't mean that it's true. So how do we know which pieces of paper to believe? Through an institution. We would believe, or there is a greater chance we will believe, if on the front page of the New York Times, or of the Sunday Times, or of the Guardian, you have: the British prime minister said,
open quotation marks, blah, blah, blah.
Because we don't trust the paper or the ink.
We trust the institution of the Guardian
or the Wall Street Journal or whatever.
With videos, we never had to do that
because nobody could fake them.
So we trusted the technology.
If we saw a video, we said, this has to be true.
But when it becomes very easy to fake videos,
then we revert to the same principle as with print.
We need an institution to verify it.
If we see the video on the official website of CNN
or of the Wall Street Journal, then we believe it
because we believe the institution backing it.
And if it's just something on TikTok,
we know that, you know, any kid can do that.
Why should I believe it?
So now we are in the transition period.
We are still not used to it.
So when we see a video of Donald Trump or Joe Biden,
the video still gets to us because we grew up in a time when it was impossible to fake it.
But I think very quickly people will realize you can't trust videos.
You can only trust the institutions.
And the question is, will we be able to produce, to create, to maintain trustworthy institutions fast enough to save
the democratic conversation? Because if not, if you can't believe anything, this is the ideal for
dictators. When you can't trust anything, the only system that works is a dictatorship. Because
democracy works on trust, but dictatorship works on terror,
on fear. You don't need to trust anything in a dictatorship. You don't trust anything. You fear.
For democracy to work, you need to trust, for instance, that some information is reliable,
that the election committee is impartial, that the courts are just, and
if more and more institutions
are attacked and people
lose trust in them, then
democracy collapses.
Going back to
information, so one
option is that the old institutions
like newspapers
and TV stations,
they will be the institutions that we trust
to verify certain videos,
or we will see the emergence of new institutions.
And again, the big question is
whether we'll be able to develop trust in them.
And I specifically say institutions and not individuals.
No large-scale society, especially not a democratic
society, can function without trustworthy bureaucratic institutions.
And will those bureaucratic institutions be AI?
That's the big question, because increasingly, AIs will be the bureaucrats.
What do you mean by bureaucrats?
What's the word bureaucrat?
What does that mean?
Oh, that's a very important question, because human civilization runs on bureaucracy.
Bureaucrats are essentially officials in government that try-
Not just in government.
I mean, the origin of the word bureaucrat comes from French, from the 18th century. And bureaucracy means the rule of the writing desk: to rule the world, or to rule society, with pen and paper and documents.
Like the example we gave in the very beginning about ownership.
So you own a house because there is a document in some archive that says that you own it.
And a bureaucrat produced this document.
And if you now need to retrieve it, then this is the job of a bureaucrat to find the right document at the right time.
And all big systems run on it.
Hospitals and schools and corporations and banks and sports associations and libraries,
they all run on these documents and the bureaucrats who know how to read and write and find and file documents.
One of our big problems is that it's difficult for us to understand bureaucratic systems
because they are a very recent development in human evolution.
And this makes us suspicious about them.
And we tend to believe all kinds of conspiracy theories about the deep state and about what's
going on in all these bureaucracies. And it's
really complicated, and it's going to be more complicated as more of the decisions will be made
by AI bureaucrats. An AI bureaucrat means that decisions like how much money to allocate to a
particular issue will no longer be made by a human official. It will be made by an algorithm.
And when people ask, why is the system broken, why didn't they give enough money to
fix it? I don't know. The algorithm just decided to give the money to something else.
Why will bureaucracies be run by AI over people? Why will, at some point, a nation decide
that, in fact, AI is better at making these decisions? First of all, it's not a future
development. It's already happening. More and more of the decisions are being made by AIs.
And this is just because the amount of information you need to take into account is enormous.
And it's very difficult for humans to do it. It's much easier for the AIs to do it.
All these people, you know, bureaucrats, lawyers, accountants... I always wonder, you know, what are humans going to be left to do? In your book, you say that AI is going so far beyond human intelligence that it should actually be referred to as alien intelligence. And if it goes so far beyond human intelligence, it's my assumption that most of the work that we do is based on intelligence. Even me doing this podcast now, this is me asking questions based on information that I've gathered, based on what I think I'm interested in, but also based on what I think the audience will be interested in. And compared to AI, I'm like a little monkey, you know what I mean? If an AI has an IQ that is a hundred times mine and a source of information that is a million times bigger than mine, there's no need for me to do this podcast. I can get an AI to do it, and in fact now I can talk to an AI and deliver that information to a human. But then if we look at most industries, like being a lawyer, accountancy, I mean, a lot of the medical profession is based on information. Driving. I think the biggest employer in the world is the profession of driving, whether it's delivery or Uber or whatever it is. Where do humans belong in all of this?
Anything which is just information in, information out is ripe for automation.
These are the easiest jobs to automate.
Like being a coder?
Like being a coder or like being an accountant. At least certain types of accountants, lawyers, doctors, they are the easiest to automate.
If a doctor, the only thing they do is just take information in, all kinds of results of blood tests and whatever.
And then information out: they diagnose the disease and they write a prescription.
This will be easy to automate in the coming years and decades.
But a lot of jobs, they require also social skills and motor skills.
If your job requires a combination of skills from several different fields,
it's not impossible, but it's much more difficult to automate it. So if you think about
a nurse that needs to replace a bandage for a crying child, this is much, much harder to automate
than just a doctor that writes a prescription. Because this is not just data. The nurse needs
good social skills to interact with the child and motor skills to just replace the bandage.
So this is harder to automate. And even for people who just deal with information,
there will be new jobs. The problem will be the retraining. And not just retraining in terms of
acquiring new skills, but psychological retraining. How do you kind of
reinvent yourself in a new profession and do it not once, but again and again and again,
because as the AI revolution unfolds, and we are just at the very beginning of it,
we haven't seen anything yet. So there will be old jobs disappearing, new jobs emerging, but the new jobs will rapidly
change and vanish.
And then there'll be a new wave of new jobs.
And people will have to reinvent themselves four, five, six times to stay relevant.
And this will create immense psychological stress.
So many of the big companies are also working at the same time on humanoid robots.
There's this humanoid robot race going on. And by humanoid robots, I mean, you know, Tesla have their humanoid robot, I think it's called Optimus, which they're developing, and it'll cost, you know, X thousands of pounds. And I watched a video of it recently where it can do quite delicate, sort of motor-skill-based stuff. So it can probably clean the house, it can probably work on the production line, it can probably put things in boxes. And I just wonder, when we say, you know, people are going to lose their jobs, in a world where you have humanoid robots and you have intelligence that's beyond us, and you combine the two, where these humanoid robots are very, very intelligent... where do the unemployed go to find these new professions?
Like obviously it's difficult to forecast the new professions of the future.
History tells us that.
But I can't figure out what the new professions are.
I mean, my girlfriend does breath work.
I guess the breath work part is quite easy to disrupt,
but then she takes women away for retreats
in Portugal and stuff.
So I'm like, okay, she's going to kind of be safe
because these women are going there to connect with humans
and to be in this little special place offline intentionally.
So retreats, she'll probably be fine.
Anything that, you know, there are things that we want in life
which are not just about solving problems.
Like I'm sick, I want to be healthy.
I want my problem solved.
But there are many things where what we want is a connection.
Like if you think about sports,
robots or machines can run much faster than people.
They have for a very long time now. And we just had the Olympics, and people are not very interested in seeing robots running against each other or against people.
Because what really makes sports interesting in the end is the human weaknesses and the ability of humans to deal with their weaknesses.
And human athletes still have jobs, even though in many disciplines, like running, you can have a machine run much faster than the world champion. I thought about this the other day. And another example is priests. Like, one of the easiest jobs to automate is the priesthood, at least of certain religions, because you just need to repeat the same texts and gestures again and again
in specific situations. Like if you have a wedding ceremony,
then the priest just needs to repeat the same words and there you are, you're married.
Now, we don't think about priests as being in danger of being replaced by robots
because what we want from a priest is not just the mechanical repetition of certain
words and gestures.
We think that only another frail flesh and blood human who knows what is pain and love
and who can suffer, only they can connect us to the divine.
So most people would not be interested
in having the wedding conducted by a robot,
even though technically it's very easy to do it.
Now, the big question, of course, what happens
if AI gains consciousness?
This is like the trillion dollar question
of AI consciousness.
Then all bets are off. But that's a different
and very, very big discussion. I mean, whether it's possible, how would we know, and so forth.
Do you think it's possible?
We have no idea. I mean, we don't understand what consciousness is. We don't know how it emerges
in the organic brain. So we don't know if there is an essential connection
between consciousness and organic biochemistry
so that it can't arise in an inorganic silicon-based computer.
First of all, it should be said again: there is a big confusion between consciousness and intelligence.
Intelligence is the ability to reach goals and
solve problems. Consciousness is the ability to feel things like pain and pleasure and love and
hate. Humans and other animals, we solve problems through our feelings. Our feelings are not something on the side.
They are a main method for how to deal with the world,
how to solve problems.
Now, so far, computers,
they solve problems
in a completely different way than humans.
Again, they are alien intelligence.
They don't have any feelings.
When they win a game of chess,
they are not joyful. When they lose a game, they are not sad. We don't know whether an inorganic structure based on silicon and not carbon will be able to generate such things or not.
That's, I think, the biggest question in science.
And so far, we have no answer.
Isn't consciousness just like a hallucination? Isn't it just like an illusion, that I think I'm conscious because I've got circuitry which tells me that I am? Effectively, it tells me, through a bunch of, like, feelings and things, that I'm conscious. Like, I think I'm looking at you now. I think I can see you.
The feeling is real. I mean, even if we are all, it's like The Matrix and we are all in... how do you know it's real? It's the only real thing in the world. I mean, everything else is just conjecture.
We only experience our
own feelings. What we see,
what we smell,
what we touch, this we actually
experience, this is real.
Then we have all these theories about
why do I feel pain? Oh, it's because
I stepped on a nail. There is such a thing
in the world as a nail and whatever. It could be that we are all inside a big computer on the planet Zircon run
by super intelligent mice. If I spoke to an AI, I could get an AI to tell me that it feels pain
and sadness. That's a big problem because there is a huge incentive to train AIs to pretend to be alive, to pretend to have feelings.
And we see that there is a huge effort to produce such AIs. And in truth, because we don't understand
consciousness, we don't have any proof, even that other humans have feelings. I feel my own feelings, but I never feel your feelings.
I only assume that
you're also a conscious being.
And society grants
a status
of a conscious
entity to
not only to humans, but also to some
animals, not based on any
scientific proof, but
based on social convention. Like most people feel
that their dogs are conscious, that their dogs can feel pain and pleasure and love and so forth.
So society accepts, most societies, that dogs are sentient beings and they have some rights
under the law. Now, even if AI has no feelings,
no consciousness,
no sentience whatsoever,
but it becomes very good
at pretending to have feelings
and convincing us that it has feelings,
then this will become a social convention
that people will feel
that their AI friend is a conscious being and therefore should be granted rights.
And there is even already a legal path for how to do it.
At least in the United States, you don't need to be a human being in order to be a legal person.
It's funny, because you kind of alluded jokingly to the fact that we might just be in, like, a simulation. You were like, well, maybe we're just in a simulation, but it could be. And it's funny, because in a world of AI, I think my belief in that as a possibility, that this is in fact just a simulation, has only increased. Because I've watched us go from, when I was born, not really having internet access, to now being able to kind of speak to this alien on my computer that can, like, do things for me, and having virtual reality experiences which are sometimes quite indistinguishable, where I fall into the trap of believing that I am inside Squid Games because I've got this headset on. And you play it forward, and you play it forward, and you play it forward, and you imagine any rate of improvement. Then I hear the arguments for simulation theory and I go, do you know, probably, if you play this forward a hundred years, at the rate of trajectory we're on, then we will be able to create information networks and organisms, in like a laboratory or in a computer, that don't necessarily realize they're in the computer. Especially with, like, what's going on with bio...
It's already happening.
To some extent, you know, these information bubbles
that more and more people live inside them,
it's still not the whole physical world.
But you get the same event
and people on, say, different parts
of the political spectrum,
they just can't agree on anything.
They live in their own matrix.
And, you know,
when the internet came along for the first time,
the main metaphor was the web,
the worldwide web.
A web is something that connects everything.
And now there is a new metaphor, which this simulation theory represents. The new metaphor is the cocoon.
It's a web that turns on you and encloses you from all sides so you can no longer see anything outside.
And there could be other cocoons with other people in there and you have no way to get to them.
Nothing that happens in the world can connect you anymore because you're in different cocoons.
You've only got to look at someone else's phone.
You've only got to look at someone else's Twitter or X or Instagram.
Is this the same reality?
It is so different.
Do you know what I was talking about over the weekend?
My friend was sat to my left scrolling.
He clicked on the discovery section, which is where you find new content.
I looked down at his phone and was like, it's all Liverpool Football Club.
It's like the entire feed is Liverpool, and my entire feed is completely different. And I was just thinking, wow, he lives in a completely different world to me, because he's a Liverpool fan and I'm a Manchester United fan. And to think about that: when you open your phone, and many of us are spending up to nine hours a day on our mobile phones, you're experiencing a completely different window into a completely different world than I am.
And this is a very ancient fear, because, for instance, Plato wrote exactly about that in what is, I think, the most famous parable from Greek philosophy, the allegory of the cave, in which Plato imagines an imaginary scenario of a group of prisoners chained inside a cave with their faces to a blank wall, on which shadows are being projected from behind them. And they mistake the shadows for
reality. And he was basically describing, you know, people in front of a screen,
just mistaking the screen for reality.
And you have the same thing in ancient India with Buddhists and Hindu sages
talking about Maya, which is the world of illusions.
And the deep fear that maybe we are all trapped inside a world of illusions, that the most important things we think about in the world, the wars we fight, we fight over illusions in our minds. And this is now becoming technically possible. Previously, these were philosophical thought experiments. Now, part of what is interesting as a historian about the present era is that a lot of ancient philosophical problems and discussions are becoming technical issues. Yes, you can suddenly realize Plato's cave in your phone.
So scary. I find it really scary, because you're right. Like, I think right now some people might say that they have some kind of grasp over, like, the ranking system, or why something shows up when I search it, or whatever. But as these alien intelligences become more and more powerful, of course we would have less understanding, because we're, like, handing over the decision-making.
In some industries they are now completely the kingmakers. Like, I'm here on a book tour. I wrote Nexus, so I go from podcast to podcast, from TV station to TV station, to talk about my book.
But the entities I'm really trying to impress are the algorithms.
Because if I can get the attention of the algorithms, the humans will follow.
The humans.
Yuck.
You know, that's how we are.
We are basically kind of carbon creatures in a silicon world.
I used to think we were in control, though.
And now I feel like the silicon's in control.
Control is shifting.
We are still in control to some extent.
We are still making the most important decisions, but not for long. And this is
why we have to be very, very careful about the decision we make in the next few years. Because
in 10 years, in 20 years, it could be too late. By then, the algorithms will be making the most
important decisions. You talk about a couple of big dangers you see with the algorithms
and AI and the sort of shift and disruption of information. One of them is this alignment
problem, which, how would you explain the alignment problem to me in a way that's simple to understand?
So the classical kind of example is a thought experiment invented by the philosopher Nick Bostrom in 2014, which sounds crazy, but,
you know, bear with it. He imagines a super intelligent AI computer, which is bought by a
paperclip factory. And the paperclip manager tells the AI, your goal, the reason I bought you, your goal, your entire existence,
you're here to produce as many paperclips as possible. That's your goal. And then the AI
conquers the entire world, kills all humans, and turns the entire planet into factories for
producing paperclips. And it even begins to send expeditions to outer space
to turn the entire galaxy into just paperclip production industry.
And the point of the thought experiment is that the AI did exactly what it was told.
It did not rebel against the humans.
It did exactly what the boss wanted.
But of course, the strategy it chose
was not aligned with the real intentions, with the real interests of the human factory manager,
who just couldn't foresee that this would be the result. Now, this sounds outlandish and ridiculous and crazy, but it already happened
to some extent, and we talked about it. This is the whole problem with social media and user
engagement. In the very same years that Nick Bostrom came up with this thought experiment in
2014, the managers of Facebook and YouTube, they told their algorithms, your goal is to increase
user engagement. And the algorithms of social media, they conquered the world and turned the
whole world into user engagement, which was what they were told to do. We are now very, very engaged.
And again, they discovered that the way to do it is with outrage
and with fear and with conspiracy theories. And this is the alignment problem. When Mark Zuckerberg
told the Facebook algorithms, increase user engagement, he did not foresee and he did not wish
that the result would be the collapse of democracies, a wave of conspiracy theories and fake news, hatred of minorities. He did not intend it. But there was a misalignment between the goal that was defined for the algorithm and the interests of human society, and even of the human managers of the companies that deployed these algorithms.
And this is still a small-scale disaster. Because the social media algorithms that created all this social chaos over the last 10 years,
they are very, very primitive AI.
If you think about the development of AI as an evolutionary process, these are like the amoebas. This is still the amoeba stage.
The amoeba being the very simple...
The very simple life forms, the beginning, like the single cell life form.
In evolutionary terms, in organic evolution, we are like billions of years before we will see the dinosaurs and the mammals or the humans.
But digital evolution is billions of times faster than organic evolution.
So the distance between an AI amoeba and the AI dinosaurs
could be covered in just a few decades.
If ChatGPT is the amoeba, what would the AI Tyrannosaurus rex look like? And this is where the
alignment problem becomes really disconcerting. Because if so much damage was done by giving
kind of the wrong goal to a primitive social media algorithm, what would be the results of giving a misaligned goal to a T-Rex
AI in 20 or 30 years? The issue at the heart of this is, you know, some people might think,
okay, just give it a different goal. But when you're dealing with private companies who are
listed on the stock market, there really is only one goal that...
Make money.
Exactly. It benefits their survival. So all of the platforms have to say, you know, the goal of this platform is to make more money and to get more attention.
Because also it's mathematically easy. And there is a huge, huge problem in how to define
for AIs and algorithms the goal in a way they can understand. Now, the great thing about make money
or increased user engagement is that it's very easy to measure it mathematically.
One day, you have a million hours being watched on YouTube. Then, a year later, it's 2 million.
Very easy for the algorithm to see, hey, I'm making progress.
But let's say that Facebook would have told its algorithm, increase user engagement in a way
that doesn't undermine democracies. How do I measure that? Who knows what is the definition
for the robustness of democracy? Nobody knows. So defining the goal for the
algorithm as increased user engagement, but don't harm democracy, almost impossible.
This is why they go for the kind of easy goals, which are the most dangerous.
But even in that scenario, if I told, if I'm the owner of a social network, and I say,
increase user engagement, but don't harm democracy, the problem I have is my competitor, who leaves out the second part and just says, increase user engagement, is going to beat me.
Because they're going to have more users, more eyeballs, more revenue, advertisers are going to be happier, then my company is going to falter, investors are going to pull out.
That's a question, because there are two things to take into consideration.
First of all, you have governments.
Governments can regulate, and they can penalize a social media company that doesn't define its goals in a socially responsible way.
Just as they penalize newspapers or TV stations or car companies
that behave in an antisocial way.
The other thing is that humans are not stupid and self-destructive, and we would like to have better products, in the sense of also socially better products.
And I gave earlier the example with food diets.
Think about it: yes, the food companies, they discovered that if they fill a product artificially with lots of fat and sugar and salt, people would like it.
But people discovered that this is bad for their health.
So you now have, like, for instance, a huge market for diet products, and people are becoming very aware of what they eat. The same thing can happen in the information market.
The cost, though, is, like, 70, 80 percent of people in the US have chronic disease and are obese. And, you know, life expectancy now looks like it's going the other way a little bit in the Western world. And, I don't know, I just feel like policing consumption of goods like alcohol, nicotine, food seems much more simple than policing information and the flow of information, beyond, you know, racism or, like, inciting violence. I don't know how you police it.
We already covered that the two most basic and powerful tools are to hold companies liable for the actions of their algorithms, not for the content that the users produce, but for the actions of the algorithms. I don't think we should penalize Twitter or Facebook.
If somebody posts a racist post, I would be very careful about penalizing Facebook for that,
because then who decides what is racism and so forth?
But if the algorithm of Facebook
deliberately spreads
some racist conspiracy theory,
that's the algorithm.
That's not human free speech.
How do you know it's a racist
conspiracy theory though?
Okay, so now we get
to the difficult conversation,
but this is something
that we have the courts for.
And I would be very, very careful about having the courts judge the content produced by individual users.
But when it comes to algorithms deliberately, routinely spreading a particular type of information, like a conspiracy theory, we can involve the courts.
The key issue is who has liability,
that it's the company that is liable
for what the algorithm is doing
and not the human individual
liable for what they are saying.
And another key distinction here
is between private and public.
Like part of the problem is the erasure of the boundary between the two.
I think that humans have a right to stupidity in private.
That in your private space with your friends and with your family, you have a right to stupidity.
You can say stupid things.
You can tell racist jokes. You can tell
homophobic jokes. It's not good. It's not nice. But you're a human being. You're allowed to do that.
But not in public. I mean, even for politicians, like as a gay person, if the prime minister tells
a homophobic joke in private, I don't need to care about that.
That's his or her business.
But if they say it in public on television,
that's a huge problem.
Now, traditionally, it was very easy to distinguish private from public.
You are in your private house with a group of friends.
You say something stupid, that's private.
It's nobody's business.
You go to the town square
and you stand on a pedestal and you shout something to thousands of people, that's public.
Here you can be punished if you say something racist or homophobic or outrageous. But it was
easy for you to know. Now the problem is you go, let's say, on WhatsApp, you think you're just talking with two of your friends and you say something really stupid, and then it goes viral and it's all over the place.
And I don't know.
Even on the most basic thing of identifying yourself as a human being.
We don't want that everybody would have to get some certification from the government to talk with their friends on WhatsApp.
But if you have 100,000 followers online, we need to know that you are not a bot, that you're actually a human being.
And again, this is not covered by freedom of speech because bots don't have freedom of speech.
It's a slippery slope, right?
Because I've gone back and forth on this argument of anonymity
and whether it's a good thing or a bad thing for social networks.
And the rebuttal that I got when I leaned to the side of IDing people
is that like totalitarian governments will use that as a way
to basically punish the people who are speaking.
The totalitarian governments are doing it whether we like it or not.
Yeah.
It's not a question that if the British do it, then the Russians will say, okay, so we'll also do it. The Russians are doing it anyway.
Will Americans start to do it? Will they start to... if someone speaks out against Trump, and he has access to their identity and information, can he go look at them and get them arrested?
If we reach that point, when the courts will allow such a thing, then we are in very deep trouble already.
And what we should realize is that with the surveillance technology now in existence, a totalitarian government has so many ways to know who you are that that's not the main issue.
You talked about the platforms being responsible for the consequences.
Yes.
In the UK, over the last month, we've had, I don't know if you've heard, we had some riots. And I think it was all triggered originally when news broke that someone had murdered some young children.
Yes.
And there was a confusion, or sort of a misinformation, around that person's religion. And that meant that people...
That's an excellent example
because if I personally,
privately say to just two of my friends,
I think the person who did it is X,
I don't think you should be persecuted for that.
I could say it in my private living room and it's the same thing if I say it on WhatsApp
or on Facebook.
But if a Facebook algorithm picks up this piece of fake news and starts recommending
it to more and more users, then Facebook is liable for the action of its algorithms. You should be able to take it to
court and say the algorithm deliberately recommended a piece of fake news. And again,
if the fake news was produced by an influencer with a million followers, then he is also liable for that.
But if a private individual in a private setting said something which is not true,
it's fake news, and then an algorithm deliberately spread it,
the main fault is with the algorithm, and the people who should be in jail are the managers of the company that
owns this algorithm and not the individual who uttered the words. Going back to the riots issue,
let's say that, I don't know, the Guardian, on the day of the riots, decided to pick up a piece of this fake news and publish it on its front page.
And they now take the editor of The Guardian to court, and he says, but I didn't write it. I just found this piece of fake news and decided to put it on the front page of The Guardian.
Now, it would be obvious to us that the editor did something very, very, very wrong.
And he might or she might have to sit in jail.
And it's not the problem of the person who originally produced the piece of fake news.
If you're the editor of one of the biggest newspapers in the country, and you decide to publish something on your front page,
you had better be very, very sure that what you're publishing is the truth,
especially if it can incite violence.
How would a social network owner know that?
How would they be able to verify that everything is true at that scale?
Not everything. But if, for instance, something is likely to lead to violence. It's a precautionary principle. First of all, do no harm. Again, I'm not asking
Facebook to censor the piece of fake news. I'm only asking it, don't get your algorithms to
spread it on purpose in order to get user engagement and make a lot of money. If you're
not sure about it, just don't spread it.
It's as easy as that. How does it know it's fake news versus it thinking that it's actually
really important, life-saving news? So for example... That's the responsibility of the
company. How does the editor of The Guardian know, or of the Financial Times, or of the Sunday Times,
how do they know if something
is true, and if something should be published on the front page? If you are now managing a social
media company, you are managing one of the most powerful newspapers in the world. And you should
have the same kind of responsibilities and the same kind of expertise. If you have no idea how
to judge whether an
algorithm should recommend something to millions of people, you're in the wrong business. You know,
if you can't stand the heat, get out of the kitchen. Don't run a social media company if
you don't know what should be shown to millions of people. It's very pertinent because obviously
Mark Zuckerberg's letter that he wrote this week says: I was approached by the FBI, who told me that Russia were trying to influence the elections. And they were given some information that there was this laptop story. Hunter Biden, who's Joe Biden's son, had this laptop story, which Facebook didn't know if it was real or not, and they thought maybe it was a Russian plant, i.e. Russia had put the story there to try and make sure Joe Biden didn't win the elections. So Facebook deprioritized it, stopped it going viral, and suppressed it. Turns out it was a real story, and it wasn't fake. And Mark Zuckerberg says he regrets suppressing it, because it was in fact a real story, and by suppressing it he kind of influenced the election to some degree.
So it's so complicated to the point that I just can't...
It's complicated to run a big media company.
It's complicated to run the Wall Street Journal or Fox News.
And then what happens if the FBI comes to Fox News or comes to the Wall Street Journal and tells them, look, there is this story planted by the Russians.
Don't encourage it.
And later on, it turns out that it was wrong.
Could happen.
And as the manager of the Wall Street Journal, you need to deal with it.
And do I trust the FBI under what conditions?
Sometimes I should.
Sometimes I should be suspicious.
I feel like you're going to end up in jail.
If you're the editor of the Wall Street Journal, you're going to end up in jail either way,
because either way you're influencing elections. But that's the business. I mean,
the real problem is when you have extremely powerful people like Zuckerberg or Elon Musk that pretend that they don't have power, that they don't have influence, that they don't shape elections.
We know for centuries that the owners and editors of newspapers, they shape elections.
And therefore we hold them to certain standards.
And now the owners and managers of platforms like Twitter and YouTube and Facebook,
they have more power than the New York Times or the Guardian or the Wall Street Journal.
And they should be held to at least the same degree of accountability. And their shtick that,
oh, we are just a platform. We just allow everybody to publish what they want. It doesn't work like that. And we don't accept it with traditional media. So why should we accept
it? That's the whole trick of these tech companies, that again, we have thousands of years of history
and they tell us, oh, it doesn't apply to us. Like if you have a traditional industry like cars, it's obvious to everybody,
you cannot put a new car on the road unless you made some safety checks to make sure the car is
safe. You cannot put a new medicine on the market or a new vaccine on the market without safety.
That's obvious, right? But when it comes to algorithms, no, no, no, no, no. That's a different set of rules. You can put any algorithm you want on the market. You don't need any safety rules. And even more basic than that, think about something like theft. You have the Ten Commandments: don't steal. And, you know, people know, yes, you shouldn't steal. Until it comes to information. Ah, no, no, it doesn't apply to information. I can take your information and, without your permission, do all kinds of things with it and sell it to third parties.
And this is not stealing. Don't steal doesn't apply to my line of business. And this is what
the tech giants have been doing in many cases over the last decade or two, telling us that
history doesn't apply to them.
That all the wisdom that humanity gained in a very painful way over centuries and thousands of years of dealing with dictatorships and with whatever, it doesn't apply to the new
technology.
And it does.
It does apply.
Do you ever feel tempted to just log off and just like go live in a field somewhere, maybe like a desert, maybe just create a little bit of a cult?
I do it every year.
Oh, really?
Yeah, I take a long meditation retreat of between 30 days and 60 days.
Like this year, I plan in December, after the book tour is over, to go for a 60-day meditation retreat in India and just completely disconnect.
No smartphone, no internet,
not even books or writing paper.
Just an information fast.
Why?
It's good for the mind.
Again, like with food, too much input isn't good for us.
We need time to digest and to detoxify.
And it's true of the mind as well.
If you just keep bombarding it with more, you get addicted to the wrong things. You develop bad habits. So I take this time off in order to really kind of digest everything that happened, and to decide what I want and what I don't want, what kind of habits, addictions, I should try to be rid of.
And also to, you know, to get to know my own mind. When the mind is constantly bombarded
by information from outside,
it's so noisy,
you cannot get to know it
because there is so much noise.
But when the noise goes away,
then you can start
to understand what is the mind?
How does it function?
How does it work? Where do thoughts
come from? What is fear?
What is anger? When you're
boiling with anger because of
something you now read,
you are focused
on the object of your anger,
but you can't understand the anger itself.
The anger controls you.
When you have an information fast,
you can just observe
what happens to me when I'm angry?
What happens to my mind, to my body?
How does it control me?
And this is more important
than any angry story in the world,
to understand what anger actually is.
It's very, very difficult.
I mean, how many times do people stop and just, you know,
try to get to know their anger?
And not the object of the anger.
This is what we do all the time.
We kind of replay.
We heard something terrible that a politician we don't like,
like, I don't know, somebody's angry about Trump. So he would replay it again. And oh, he said like this, he did like
that, he will do this, he will do that. And you don't get to know your anger that way.
I have about 50 different companies in my portfolio at Flight Group now, some of which
I've invested in and some of which I've co-founded or founded myself. One thing I've noticed is that
most companies don't put enough effort into the hiring process. In my mind, the first and most critical
thing in business is assembling your group of people because the definition of the word company
is group of people. And throughout all of my companies, whenever I'm looking to hire someone,
my first port of call is LinkedIn Jobs, who I'm happy to say are also a sponsor of this podcast.
They've helped us source professionals who we truly can't find anywhere else, even those who aren't actively
searching for a new job, but who might be open to a perfect role. In fact, over 70% of LinkedIn
users don't visit other leading job sites. So if you're not looking on LinkedIn, you're probably
looking in the wrong place. So today, I'm giving the Diary of a CEO community a free LinkedIn job post. Head to linkedin.com slash DOAC now and let me know how you get on.
Terms and conditions apply.
So interesting.
I was playing out the scenarios in my head as you were speaking of this future where
there's almost these two species of human.
You have one species of human who are connected to the information highway through the internet,
through the Neuralink in their brain. It's just like they're hooked, yeah, and the algorithm is feeding them information, and they're acting upon it, and they're feeding it. And then you have this other group of people who decided to reject that, who didn't get the Neuralink, who aren't trying to interface with AI, and that are living in a tribe in some jungle somewhere. And, like, my girlfriend said this to me many years ago. She's gonna... I think there's gonna be a split, yeah. And I kind of went, you know, whatever. But now I'm like, I can see why. As things get more extreme, you go, do you know what, I'm gonna make a decision here. And especially when I saw the Neuralink that Elon Musk's working on, that allows you to control computers with your brain, I sat down with...
And the computer to control your brain also.
Yeah. You're right. I actually didn't think about that. But I just imagined, and this is a question for everyone listening: if there's you and me, and I have the chip in my brain that humans now have in their brain, that they're using to control computers with, I am a different species to you. Because I can control my car downstairs. I can control the lights in this room. I can ask my brain questions and get the answers.
My IQ becomes 5,000.
Yours is still 150 or 200.
Yours is probably 250.
But I'm a different species to you.
I have such a huge competitive advantage over you that if you don't get the chip,
then you're screwed.
That's speciation.
Yeah.
Again, on a small scale, we saw it before in history.
There were the people who adopted the written document and the people who rejected it.
And they are not with us anymore.
Because the people who adopted the written document, they built these kingdoms and empires and they conquered everybody else.
And we are in danger
of the same thing happening.
And this is not a good thing
because it's not like life was better
for the people with the documents.
In many cases,
life was better for the hunter-gatherers
who lived before.
So what's the solution?
Having read your book, your brilliant book, Nexus, A Brief History of Information Networks from the Stone Age to AI: what is the solution? How do we stop the alignment problems, us all becoming paperclips, the social chaos, the misinformation, the silicon curtain, as you talk about in the book? How do we stop these things destroying our world? Is there hope? Are you optimistic?
The key is cooperation, is connection between humans. I mean, the humans are still more
powerful than the AIs. The problem is that we are divided against each other and the algorithms
unintentionally are increasing the divide. And this is the oldest rule of every empire: divide and rule.
This was the rule of the Romans, of the British Empire.
If you want to rule a place, you divide the people of that place against one another,
and then it's easy to manipulate and control them.
This is now happening to the entire human species with AI.
That just as we had kind of, you know, the iron curtain in the Cold War,
now we have the silicon curtain, dividing not just China from the US,
but also Democrats from Republicans, also one person from another person,
and all of us from the AIs, which increasingly make the decisions about all that.
We still have the power for, I don't know, five years, 10 years, 20 years,
to make sure it doesn't go in dystopian direction.
But for that, we need to cooperate.
Are you optimistic?
I try to be a realist.
I just came from Israel
and I saw a country destroying itself
for no good reason whatsoever.
It's a country that just pressed the self-destruct button
and for no good reason.
And it can happen on a global scale.
What do you mean it pressed the self-destruct button?
It's not just the war between Israelis and Palestinians,
but Israeli society turning against itself,
greater and greater division and animosity.
And it's like a dark hole of anger and of violence,
which is sucking more and more people in,
you know, all over the world,
you now feel the shockwaves
from this dark hole in the Middle East.
And there is no good reason.
There is no objective reason.
If I may say something
about the Israeli-Palestinian conflict,
there is no objective reason for it.
It's not like there is not enough
land between the Mediterranean and the Jordan River that people have to fight for the little
land there is, or that there is not enough food. There is enough food for everyone to eat.
There is enough land to build houses and hospitals and schools for everyone.
Why do people fight? Because of different stories in their minds. They have these different
mythologies that God gave this
whole place just to us. You have
no right to be here.
And they fight over that.
And
this is a
local, regional tragedy.
It can happen on a global scale.
And if something ultimately destroys us, it will be our own delusions, not the AIs.
The AIs, they get their opening because of our weaknesses,
because of our delusions.
Yuval, thank you so much for writing a book.
I think this book is one of the most well-timed books
that I've ever come across
because of everything that's happening in the world right now. And it really helped me to understand that the problem isn't necessarily
me versus you. If you're on the other side of the aisle, the problem is information,
the networks of information that we consume, who's controlling those networks of information.
Somebody is manipulating us to be on different sides, not just to be on different sides,
but to see each other as enemies.
And right now that's a person.
But it might not be.
Soon it might not be a person, no.
And understanding that I think helps us focus on the root cause of issues that are sometimes hard to identify.
I think the problem is my neighbor.
I think it's that person with different color skin. But actually, if you look one level deeper, it's the information networks
and what I'm being exposed to
that are brainwashing me and creating those stories.
And as you talk about in your previous book,
stories are ultimately what are running the world.
And it's this wonderful,
the Nexus is just a wonderful book at a wonderful time
that helps us to access this knowledge
of the power of information
and how it impacts democracy and relationships
and society and business and everything in between
in a way that I hope will lead to action.
And I think that is something to be optimistic about.
Yeah, and ultimately, I think most humans are good.
They're good people.
When you give people bad information, they make bad decisions.
The problem is not with the humans.
It's with the information.
Amen.
Yuval, we have a closing tradition on this podcast
where the last guest leaves a question to the next guest,
not knowing who they're going to be leaving it for.
Oh, okay.
And the question left for you is: what does it mean to be strong?
Um... to accept reality as it is. To deal with reality without trying to hide it, disappear it, put a veil over it.
So interesting.
I think you're right.
I think you're right.
Certainly not the answer I would have given.
But, you know, you come to...
What would you say?
Oh, what would I say? I guess I probably would have spoken to, like, perseverance in the face of a lot of different difficulties, and one of those is information. But it's just that, the idea of, like, persevering towards whatever your subjective goal is, in the face of and in spite of a variety of different difficulties. Maybe that's strength. So that could be raising a kid, or it could be going to the gym, or whatever. But I like your definition as well, because I think it's much more important in the times we find ourselves in. And honestly, as a podcaster, you sometimes feel like you're caught right in the middle of it, because I think everyone's trying to figure out if I'm, like, on the right wing, on the left wing, if I believe this, if I endorse every guest that I sit with. And you almost have to try and remain impartial, but it's very, very difficult for people to understand that, because they want you to fit somewhere. And they want to, you know...
Because that's weakness.
I mean, you have a lot of people who claim to be very strong.
Yeah.
Who admire strength as a value.
Yeah.
But they can't deal with parts of reality that don't fit into their worldview or their desire.
Yeah.
And they think that strength is, I have the strength to just make
these parts of reality disappear. And no, this is weakness. And I am sorry for going back to that,
but this is also the war. What is war? It's trying to make a part of reality that you don't like disappear.
In this case, an entire people. I don't like these people.
I don't think they should be in reality. So I try to make them disappear. And people say,
oh, he's a very strong leader. He's not. He's a very weak leader. That a strong leader would
be able to acknowledge, no, these people exist. They are part of reality. Let's now find out how do we live with them.
Amen. Your book Nexus, A Brief History of Information Networks from the Stone Age to AI,
is a must-read for everybody that listens to this podcast and has any interest in these subjects at
all. It's endorsed by some of my favorite people, Mustafa Suleiman, but also Stephen Fry and Rory
Stewart, who's a great person as well. And it's endorsed for a very good reason
because it's a completely mind-expanding book
written from someone who only writes
exceptional culture-shifting books.
So I'm going to link it below.
I highly recommend anybody
that's listened to this conversation
and that's interested in this subject matter
to go and get this book right now.
It's available right now for pre-order
and then it's shipping in five days from now
when it releases. So be the first to read it and hopefully be the first to understand and action
some of the things that you learn in this book. Yuval, thank you so much for your time. Thank you.
Isn't this cool? Every single conversation I have here on the Diary of a CEO, at the very end of it, you'll know, I ask the guest to leave a question in the Diary of a CEO. And what we've done is we've turned every
single question written in the Diary of a CEO into these conversation cards that you can play at home.
So you've got every guest we've ever had, their question, and on the back of it if you scan that QR code you get to watch the person who
answered that question. We're finally revealing all of the questions and the people that answered
the question. The brand new version 2 updated conversation cards are out right now at
theconversationcards.com. They've sold out twice instantaneously. So if you are
interested in getting hold of some limited edition conversation cards, I really, really recommend
acting quickly. Thank you.