Your Undivided Attention - 'A Turning Point in History': Yuval Noah Harari on AI’s Cultural Takeover
Episode Date: October 7, 2024

Historian Yuval Noah Harari says that we are at a critical turning point, one in which AI's ability to generate cultural artifacts threatens humanity's role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, 'alien AI agents'?

In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity's AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.

This episode was recorded live at the Commonwealth Club World Affairs of California.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan
The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza
Further reading on the Stanford Marshmallow Experiment
Further reading on AlphaGo's "move 37"
Further reading on Social.AI

RECOMMENDED YUA EPISODES
This Moment in AI: How We Got Here and Where We're Going
The Tech We Need for 21st Century Democracy with Divya Siddarth
Synthetic Humanity: AI & What's At Stake
The AI Dilemma
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
Transcript
Hey everyone, it's Aza.
So today we'll be bringing you the public conversation I just had with our friend Yuval Noah Harari.
He's the author of Nexus: A Brief History of Information Networks from the Stone Age to AI.
We sat down at the Commonwealth Club of California with moderator Shirin Ghaffary, who's the senior AI reporter at Bloomberg News.
It was a fascinating conversation. It covered a lot of ground.
We talked about the historical struggles that emerge from the invention of new technology,
humanity's relationship to technology, whether we're a one or a two marshmallow species,
what Move 37 means for global diplomacy, and fundamentally, how we as humanity can survive ourselves.
We also talked about the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.
One of the points we return to again and again
is that we have a short window before AI is fully entangled with society
in which we can make choices that will decide the future
we will all be forced to live in.
I hope you enjoy the conversation as much as I did.
Oh, and one last thing.
Please don't forget to send us your questions.
You can email us at undivided@humanetech.com
or just record a voice memo on your phone
so we can hear your voice and then send that to us.
If there's anything you've heard on the show
that you want us to go deeper on or explore,
we want to hear from you.
Hello and welcome to tonight's program
hosted by the Commonwealth Club World Affairs
and the Center for Humane Technology.
My name is Shirin Ghaffary.
I'm an AI reporter for Bloomberg News
and your moderator for tonight's conversation.
Before we get started, we have a few reminders.
Tonight's program is being recorded,
so we kindly ask that you silence your cell phones
for the duration of our program.
And also, if you have any questions for our guest speakers,
please fill them out on the cards that were on your seats.
Now, it is my pleasure to introduce tonight's guests,
Yuval Noah Harari, and Aza Raskin.
Yuval Noah Harari is a historian, public intellectual,
and best-selling author who has sold over 45 million books in 65 languages.
He's also the co-founder of Sapienship,
an international social impact company focused on education and storytelling.
Yuval is currently a distinguished research fellow
at the University of Cambridge Center for the Study of Existential Risk,
as well as a history professor at the Hebrew University of Jerusalem.
His latest book is Nexus,
a brief history of information networks from the Stone Age to AI.
Aza Raskin is the co-founder of the Center for Humane Technology
and a globally respected thought leader
on the intersection of technology and humanity.
He hosts the TED podcast Your Undivided Attention
and was featured in the two-time Emmy-winning Netflix documentary, The Social Dilemma.
Yuval and Aza, welcome.
Thank you. It's great to be here.
Let me first start off by asking you,
about a year and a half ago, and I want to pose this to you both,
there was a letter.
You signed this letter, and Aza, I'm curious to hear your thoughts about it,
but I want to talk about what that letter said
and where we're at a year and a half from now, from then.
So this letter was a call to pause AI development,
a call on the major AI labs to halt progress
of any kind of AI models more powerful than GPT-4.
That didn't happen.
I don't think anybody expected it.
It was a PR, you know, trick.
I mean, nobody really expected everybody to stop.
Right.
But what do we make of the fact of the moment that we're in right now,
which is that we are seeing this unprecedented race
by some of the most powerful technology companies in the world
to go full speed ahead toward reaching some kind of artificial general intelligence
or super intelligence?
I think things have only sped up, right?
Yeah, absolutely.
I think the key question is really all about speed and all about time.
And in my profession, I'm a historian,
but I think history is not the study of the past.
History is the study of change, how things change.
And at present, things are changing at a faster rate
than in any previous time in human history.
And for me, that's the main problem.
I don't think that AI necessarily is a bad technology.
it can be the most positive technology
that humans have ever created
but the thing is that AI moves
it's an inorganic thing
it's an inorganic entity
it moves at an inorganic speed
and humans are organic beings
and we move much much much slower
in comparison
humans are extremely adaptable animals
but we need time to adapt
And that's the main requirement
for dealing effectively, positively with the AI revolution:
give us time.
And when you talk with the people leading the revolution,
most of them,
maybe after an hour or two of discussion,
they generally say,
yes, it would be a good idea to slow down
and to give humans a bit more time,
but we cannot slow down
because we are the good guys
and we want to slow down,
but our competitors will
not slow down. Our competitors are either here, in another corporation, or across the ocean, in another
nation. And you talk to the competitors, they say the same thing. We would like to slow down,
but we can't trust the others. And I think the key paradox of the whole AI revolution is that
you have people saying we cannot trust the humans, but then they say, but we think we would be
able to trust the AIs.
Because when you raise then the issue of how can we trust these new intelligences that
we are creating, they say, oh, we think we can figure that out.
Yeah.
So Aza, I want to pose this to you first.
If we shouldn't trust the AI, who should we trust?
Hmm.
Here's, I guess, the question to ask, which is, if you were to look back through history and
give any one group a trillion times more power than any other group, who would you trust?
Like, which religion?
Like, which government?
The answer is, of course, none of them.
And so this is the predicament we find ourselves in, which is, you know, how do we find
trust for technology that is moving so fast that if you take your eyes off of Twitter,
you are already behind.
There's a, you know, thinking about like that pause letter
and like, what did it do?
It's interesting because there was a time before that letter
and people were not yet talking about the risks of AI.
And after that letter, everyone was talking about it.
In fact, it paved the way for another letter
from the Center for AI Safety,
where they had many of the
leaders of AI say that we need to take the threat of AI as seriously as pandemics and nuclear war.
What we need is for the fear of all of us losing to become greater than the fear of me losing to you.
It is that equation that has to shift to break the paranoia of, well, if I'm not going to do it, then somebody else will, so therefore I have to go forward.
And just to set up the stakes a little bit,
and why exactly you say that it's ridiculous
to think that letter was meant to even stop AI development.
I think there's a good analogy here,
which is what oil is to physical labor,
that is to say every barrel of oil is worth 25,000 hours of physical labor,
somebody moving something in the world.
What oil is to physical labor, AI is to cognitive labor.
That thing that you do when you open up an email and type,
or when we're doing research.
And that really sets up the race
because you could ask the exact same question,
why did we have the Paris Climate Accords?
And yet nothing really happened.
And it's because the center of our economy,
the center of competition,
runs through cognitive and physical labor.
I want to talk for a second about just the reverse,
the kind of accelerationist argument for AI.
What do you say to the technologists,
and we're here in the heart of Silicon Valley,
where I grew up, you grew up, right?
People say, don't sweat the risks too much.
You know, sure, we can think about and anticipate them,
but we just have to build because the upside here is so immense.
There are benefits for medicine.
We can make it more affordable for the masses.
Personalized education. Aza, you do research about communicating with animals.
It is so cool.
I want to talk about that to you.
But Yuval, I want to ask you first,
what do you make of that kind of classic sort of Silicon Valley,
techno-optimist counter-argument
that if we are too
fixated on the negatives,
we are never going to develop this potentially
immensely helpful technology for society.
First of all, nobody is saying
don't develop it. Just do it
more slowly.
I mean, we are aware,
even the critics, again,
part of my job as a historian and a philosopher
is to kind of
shine a light on the threats
because the entrepreneurs, the engineers,
the investors, they obviously,
focus on the positive potential.
Now, I'm not denying the enormous positive potential,
whether you think of healthcare,
whether you think of education, of solving climate change,
of, you know, the fact that every year more than a million people
die in car accidents, most of them caused by human error,
somebody drinking alcohol and driving, falling asleep at the wheel,
things like that.
The switch to self-driving vehicles is likely to save a million people
every year. So we are aware of that. But we also need to take into account the dangers, the threats,
which are equally big. Could in some extreme scenarios be as catastrophic as a collapse of
civilization? To focus, to give just one example, very primitive AIs, the social media
algorithms, have destabilized democracies all over the world. We are now
in this paradoxical situation where we have the most sophisticated information technology in history
and people can't talk to each other and certainly can't listen. It's becoming very difficult
to hold a rational conversation. You see it now in the U.S. between Republicans and Democrats
and you have all these explanations. Oh, it's because of U.S. society and economics and
globalization, whatever. But you go to almost every other democracy in the world in my home country
in Israel, you go to France, you go to Brazil, it's the same.
It's not the unique conditions of this or that country.
It's the underlying technology that makes it almost impossible for people to have a conversation.
Democracy is a conversation and the technology is destroying the ability to have a conversation.
Now, is it worth it that, okay, we get these benefits, but we lose democracy all over the world?
And then this technology is in the hands of authoritarian regimes
that can use it to create the worst totalitarian regimes,
worst dystopias in human history.
So we have to balance the potential benefits
with the potential threats and move more carefully.
And actually, this thing,
I really want the audience to do like a find and replace
because we'll always get asked,
do the benefits outweigh the risks?
and social media taught us
that is the wrong question to ask.
The right question to ask is
will the risks
undermine the foundations of society
so that we can't actually enjoy the benefits?
That's the question we need to be asking.
So if we could go back in time
to say 2008, 2009, 2010,
and instead of social media deploying
as fast as possible into society,
we said, yes, there are a lot of benefits,
But let's just wait a second and ask what are the incentives
that are going to govern how this technology is actually rolled out into society,
how it'll impact our democracies, how it'll impact kids' mental health.
Well, the reason why we were able to make The Social Dilemma,
and we started calling, back in 2013, the direction that social media was going to take us,
was because we said, well, just like Charlie Munger said,
who's Warren Buffett's business partner, show me the incentive.
And I'll show you the outcome.
What is the incentive for social media?
It's to make you more reactive and get a reaction from your nervous system.
And as soon as you say it that way, you're like,
well, of course, the things that are outrageous,
the things to get people mad,
that essentially cold civil wars are very profitable
for engagement-based business models.
It's all foreseeable outcomes from a business model.
So the question we should be asking ourselves now with AI,
Because once social media became entangled with our society,
it took hostage GDP, it took hostage elections
because you can't win an election unless you're on it,
it took hostage news and hollowed it out.
Once it's all happened, it's very hard to walk back and undo it.
So what we're saying is we need to ask the question now,
well, what is the incentive driving the development of AI?
Because that, not the good intentions of the creators,
is going to determine which world we live in.
Maybe I'll make a very strange historical comparison here
that Silicon Valley reminds me a little of the Bolshevik party.
Controversial analysis, but okay, I'll hear you.
Around, you know, after the revolution, they thought,
I mean, there are huge differences, of course,
but two things are similar.
First of all, the ambition to re-engineer society from scratch,
we are the vanguard. Most people in the world don't understand what is happening. We are this small vanguard that understands, and we think we can re-engineer society from its most basic foundations and create a better world, a perfect, almost perfect world. And the other common thing is that if you become convinced of that, it's an open check to do some terrible things on the way, because
you say we are creating utopia, the benefits would be so immense that, as the saying goes,
to make an omelet, you need to break a few eggs. So, I mean, this belief in creating the best
society in the world, it's really dangerous because then it justifies a lot of short-term harm
to people. And of course, in the end, maybe you don't get to be. To be.
build the perfect society.
Maybe you misunderstood.
And really the worst problems come,
not again from the technical glitches of the technology,
but from the moment the technology meets society.
And there is no way you can simulate history in a laboratory.
Like when there is all these discussions about safety,
and the technology companies,
the tech giants tell us,
we tested it, this is safe.
For me, the historian, the question,
how can you test history in a laboratory?
I mean, you can test that it is safe
in some very limited, narrow sense.
But what happens when this is in the hands
of millions of people,
of all kinds of political parties, of armies,
do you really know how it will play out?
And the answer is obviously no, nobody can do that.
there are no repeatable experiments in history
and there is no way to test history in a laboratory
I have to ask, Yuval,
you've had a very welcome reception in Silicon Valley
in tech circles over the years I've talked to tech executives
who are big fans of your work, of Sapiens.
Now with this, you know, this new book,
which has a pretty, I would say, critical outlook
about some of the risks here of this technology
that everyone is so excited about in Silicon Valley.
How have your interactions been with tech leaders recently?
How have they been receiving this book?
I know you've been...
I mean, it's just out, so I don't know yet.
But what I do know is that many of these people
are very concerned themselves.
I mean, they have kind of public face
that they are very optimistic
and they emphasize the benefits and so forth.
but they also understand, maybe not the risks,
but the immense power of what they are creating
better than almost anybody else.
And therefore, most of them are really worried.
Again, I mentioned earlier this kind of thing,
the arms race mentality:
if they could slow down,
if they thought they could slow down,
I think most of them would like to slow down.
But again, because
they are so afraid of the competition,
they are in this
arms race mentality
which doesn't allow them to do it.
And, you mentioned the word
excited, and you also talked about the excitement.
I think there is just far too much
excitement in all that.
It's really the most misunderstood word
in the English language, at least in the United States.
People don't really understand what the word excited means.
They think it means happy.
So when they meet you, they tell you,
oh, I'm so excited to meet you.
And this is not the meaning of the word.
I mean, happiness is often calm and relaxed.
Oh, I'm so relaxed to meet you.
And excited is like when all your nervous system
and all your brain is kind of on fire.
And this is good sometimes, but a biological fact
about human beings and all other animals is that if you keep them excited
all the time, they collapse and die.
And I think that the world as a whole and the United States and Silicon Valley is just
far too excited.
You know, we are currently starting to have these debates about whether AI is conscious.
It's not even clear that humanity is.
And when I think, actually, I mean, you're the historian, so please jump in if I'm getting
something wrong.
But when I think about humanity's relationship with technology, we've always been a species
co-evolving with our technology, we'll have some problem, and we'll use technology to solve
that problem, but in the process, we make more, bigger, different problems.
And then we say, keep going.
And so it's sort of like humanity is like, we have a can, and we
kick it down the road and it gets a little bit bigger
but that's okay because next time around
we can kick the can down the road again
and it gets a little bigger
and by and large
I think we've made you could argue
really good trades with technology
like we all would rather not live
probably in a different era than
now so we're like okay maybe
we've made good trades and those externalities
are fine but now that can
is getting so big to be the size
of the world right we invent plastic
and Teflon, amazing, but we also get forever chemicals.
And the New York Times just said that the cost to clean up forever chemicals
that are at unsafe levels for human beings,
and that are causing farm animals to die,
would cost more than the entire GDP of the world every year.
We're at the breaking points of our biosphere,
of our psychosocial sphere.
and so it's unclear if we can kick the can down the road any further
And if we take AI... we have this incredible machine called civilization, and it has pedals,
and you pedal the machine, you get skyscrapers and medicine and flights and all these amazing
things, but you also get forever chemicals and ozone holes and mental health problems.
And you just take AI and you make the whole system more efficient, and the pedals go faster.
Do we expect that the fundamental boundaries
of what it is to be human and the health of our planet,
do we expect those things to survive?
And to me, this is a much scarier sort of direction
than what some bad actors are going to do with AI.
It's what is our overall system going to do with AI.
And maybe I'll just add to that.
In history, usually the problem with new technology,
is not the destination, but the way there.
Yeah, right.
That when a new technology is introduced
with a lot of positive potential,
the problem is that people don't know
how to use it beneficially and they experiment,
and many of these experiments turn out to be terrible mistakes.
So if you think, for instance,
about the last big technological revolution,
the industrial revolution.
So when you look back,
and I had these conversations many times,
like with the titans of industry, and they will tell me something like, you know, when they
invented the train or the car, there were all these apocalyptic prophecies about what it would do to
human society, and look, things are now much, much better than they were before the inventions of
these technologies. But for me, the historian, the main issue is what happened on the way.
Like if you just look at the starting point and the end point: the year is 1800,
before the invention of trains and telegraphs and cars and so forth,
and you look at the end point, let's say the U.S. 2000,
and you look at almost any measure except the ecological health of the planet,
let's put that aside for a moment if we can.
You look at every other measure, life expectancy, child mortality,
women dying in childbirth, it all improved dramatically.
Everything got better, but it was not a straight line.
The way from 1800 to 2000 was a roller coaster
with a lot of terrible experiments in between.
Because when industrial technology was invented,
nobody knew how to build an industrial society.
There was no model in history.
So people tried different models.
And one of the first big ideas that came along
was that the only way to build an industrial society,
is to build an empire.
And there was a rationale, a logic behind it
because the argument was
agrarian society can be local,
but industry needs raw materials,
it needs markets.
If we build an industrial society
and we don't control the raw materials
and the markets, our competitors,
again, the arms race mentality,
our competitors could block us and destroy us.
So almost any country that industrialized,
even a country like Belgium,
when it industrializes in the 19th century,
it goes to build an empire in the Congo.
Because this is how you do it.
This is how you build an industrial society.
Today we look back and we say this was a terrible mistake.
Hundreds of millions of people suffered terribly for generations
until people realized actually you can build an industrial society
without an empire.
Other terrible experiments were communist and fascist totalitarian regimes.
Again, the argument, it was not something divorced from industrial technology.
The argument was the only way these enormous powers released by the steam engine,
the telegraph, the internal combustion engine, democracies can't handle them.
Only a totalitarian regime can harness and make the most of these new technologies.
And a lot of people, again, going back to the Bolshevik revolution,
a lot of people in the 1920s, 30s, 40s, were really convinced
that the only way to build an industrial society
was to build a totalitarian regime.
And we can now look with hindsight and say,
oh, they were so mistaken.
But in 1930, it was not clear.
And again, my fear, my main fear with the AI revolution
is not about the destination,
but it's the way there.
Nobody has any idea how to build an AI-based society.
And if we need to go through another cycle of empire building
and totalitarian regimes and world wars to realize,
oh, this is not the way, this is how you do it.
That is very bad news.
You know, as a historian, I would say that the human species
on the test of the 20th century,
how to use industrial society,
our species got a C-minus.
Enough to pass,
most of us are here,
but not brilliant.
Now, if we get a C-minus
on how to deal, not with steam engines,
but on how to deal with AI,
that is very, very bad news.
What are the unique potential failed experiments
that you worry could play out in the short term with AI?
Because if you look at those kind of catastrophic
or existential risks,
we haven't seen them yet, right?
If you discount the collapse of democracies.
I mean, from very primitive AIs.
I mean, the social media algorithms,
and maybe go back really to the basic definition
of what is an AI,
not every machine and not every computer or algorithm is an AI.
For me, the distinct feature,
what makes AI AI, is the ability to make decisions
by itself and to invent new ideas by itself, to learn and change by itself.
Yes, humans design it, engineer it in the first place,
but they give it this ability to learn and change by itself.
And social media algorithms in a very narrow field had this ability.
The instruction, the goal they were given by Twitter and Facebook and YouTube
was not to spread hatred and outrage and destabilize democracies.
The goal they were given is increase user engagement.
And then the algorithms, they experimented on millions of human guinea pigs
and they discovered by trial and error
that the easiest way to increase user engagement is to spread outrage.
That outrage is very engaging, all these hate-filled conspiracy theories
and so forth, and they decided to do it.
And these were decisions made by a non-human intelligence.
Humans produced enormous amount of content,
some of it full of hate,
some of it full of compassion, some of it boring.
And the algorithms decided, let's spread
the hate-filled content, the fear-filled content.
And what does it mean that they decided to spread it?
They decided that this will be at the top of your Facebook News Feed.
This will be the next video on YouTube.
This will be what they will recommend or autoplay for you.
And, you know, this is traditionally one of the most important jobs in the world.
They basically took over the job of content editors and news editors.
And, you know, when we talk about automating jobs, we think about automating taxi drivers,
automating coal miners.
It's amazing to think that one of the first jobs in the world
which was automated was news editors.
I picked the wrong profession.
And this is why we say
first contact with AI was social media.
And how did we do?
We sort of lost.
It's not a C minus, it's an F.
Yeah, exactly.
F, wow.
What about all the people who have positive interactions in social media?
You don't give some grade inflation for that?
I mean, I met my husband online on social media 22 years ago, so I'm also very grateful
to social media, but again, when you look at the big picture and what it did to the basic
social structure, the ability to have a reasoned conversation with our fellow human beings,
with our fellow citizens, I mean, on that, I would say, we get an F.
How can we pass around information?
Information, which is the topic of your book?
An F in the sense that we are failing the test completely.
It's not like we are barely passing it.
We are really failing it all over the world.
And then we need to understand that democracy is, in essence, is a conversation
which is built on information technology.
For most of history, large-scale democracy was simply impossible.
We have no example of a large-scale democracy from the ancient world.
All the examples are of small city-states like Athens or Rome, or even smaller tribes.
It was just impossible to hold a political conversation between millions of people spread over an entire country.
It became possible only after the invention of modern information technology,
first newspapers, then telegraphs and radio and so forth.
And now the new
information technology is undermining all that.
And how about with this kind of generative AI,
we're still in the really early phases of adopting it as a society, right?
But how about with something like ChatGPT,
how do you think that might change kind of the information dynamic?
What are the specific information risks there
that are different than the social media algorithms of the past?
We've never had before non-humans about to generate
the bulk of our cultural content.
Sometimes we call it the flipping: it's the moment when human beings' content, like our culture,
becomes the minority. And of course, then the question is, what are the incentives for that?
So if you think TikTok is engaging and addicting now, you have seen nothing. As of like last week,
Facebook launched an Imagine For You page where AI generates the thing it thinks you're going to like.
Now, obviously, it's at a very early stage,
but soon. There's actually a network called Social.AI
where they tell you that every one of your followers
is going to be an AI, and yet it feels so good
because you get so many followers, and they're all commenting,
and even though you know, it's cognitively impenetrable,
and so you fall for it, right?
This is the year, 2025, when it's not just going to be ChatGPT,
a thing that you go to and type into.
it's going to be agents
that can call themselves
that are out there
actuating in the world
doing whatever it is
a human being can do online
and that's going to make you think about
just one individual
that's maybe creating deepfakes themselves
talking to people, defrauding people
and you're like no
it's not just one individual
you can spin up a corporation-scale set of agents.
they're all going to be operating
according to whatever market incentives are out there
so that's just like some of
what's coming with generative AI.
Maybe I'll add to that
that before we even think in terms of risks and threats
or opportunities, is it good, is it bad,
just to stop for a moment and try to understand
what is happening, what kind of really turning point
in history we are at?
Because for tens of thousands of years,
humans have lived inside a human-made
culture. We are cultural
animals. Like, we live our
lives and we constantly
interact with cultural
artifacts, whether
it's texts or images,
stories, mythologies,
laws, currencies,
financial devices.
It's all coming out of
the human mind. Some humans
somewhere invented this.
And up till now,
nothing on the planet
could do that.
only human beings.
So any song you encountered,
any image, any currency, any religious belief,
it comes from a human mind.
And now we have on the planet
something which is not human,
which is not even organic,
it functions according to a completely alien logic in this sense
and is able to generate such things at scale,
in many cases better than most humans,
maybe soon better even than the best humans.
And we are not talking about a single computer.
We are talking about millions and potentially billions of these alien agents.
And is it good? Is it bad? Leave it aside.
Just think that we are going to live in this kind of new hybrid society
in which many of the decisions, many of the inventions are coming from a non-human consciousness.
Now, I know that many people here in the States, also in other countries,
now immigration is one of the most hotly debated topics.
And without getting into the discussion, who is right, who is wrong,
obviously we have a lot of people very worried that immigrants are coming
and they could take our jobs and they have different ideas
about how to manage the society, and they have different cultural
ideas. And we are about, in this sense, to face the biggest immigration wave in history, coming not from
across the Rio Grande but from California, basically. And these immigrants from California, from
Silicon Valley, they are going to enter every house, every bank, every factory, every
government office in the world. They are not going,
you know, they're not going to replace
the taxi drivers.
And the first people they
replaced were the news editors
and they will replace the
bankers. They will replace
the generals. We can talk about what it's
doing to warfare already now, like in the war in
Gaza. They will replace
the CEOs. They
will replace the investors.
And they have very,
very different cultural and social
ideas than we have.
Is it bad? Is it good?
You can have different views about this wave of immigration.
But the first thing to realize is that we've seen nothing like that in history.
It's coming very fast.
Now, again, I was just yesterday in a discussion where people said,
you know, ChatGPT was released almost two years ago,
and it still didn't change the world.
And I understand that for people who kind of run a high tech company,
two years is like eternity.
It is.
And in their thinking, in their culture, two years...
nothing changed in two years.
In history, two years is nothing.
You know, imagine that we are now in London in 1832.
And the first commercial railroad network,
the first commercial railroad line was opened two years ago
between Manchester and Liverpool in 1830.
And we are having this discussion and somebody says,
look, all this hype around trains, around steam engines.
It's been two years since they opened the first
rail line and nothing has changed.
But, you know, within 20 years or 50 years, it completely changed everything in the world.
The entire geopolitical order was upended, the economic system, the most basic structures
of human society.
Another topic of discussion in this meeting yesterday was the family, what is happening
to the family.
And when people said family, they meant what most people think about as family
after trains came, after the Industrial Revolution,
which is the nuclear family.
For most of history, when people said family,
they thought extended family,
with all the aunts and uncles and cousins and grandparents.
This was the family, this was the unit.
And the Industrial Revolution,
one of the things it did in most of the world
was to break up the extended family.
And the main unit became the nuclear family.
And this was not the traditional family of humans.
This was actually an outcome of the Industrial Revolution.
So it really changed everything, these trains.
It just took a bit more than two years.
And this was just steam engines.
And now think about the potential of a machine
that can make decisions, that can create new ideas,
that can learn and change,
and we have billions of these machines everywhere,
and they can enter into every human
relationship, not just families.
Like, let's take one example, like people writing emails.
And now I know many people, including in my family,
who would say,
oh, I'm too busy to write this.
I don't need to think for 10 minutes about how to write an email.
I'll just tell ChatGPT, write a polite letter that says no.
And then ChatGPT writes a whole page
with all these nice phrases
and all these compliments
which basically says no
And of course, on the other side,
you have another human being
who says, I don't have the time
to read this whole letter now.
They ask their ChatGPT,
tell me, what did they say?
And the ChatGPT on the other side says,
they said no.
Do you use ChatGPT yourself?
I leave it to the other family
members and team members
I use it a little for translation
and things like that.
But I think it's also coming for me.
Yeah, definitely.
How about you, Aza?
Do you use ChatGPT or generative AI
in your day-to-day?
I do, absolutely.
How are you using it?
Incredible metaphorical search engine.
So, for instance, there's a great example
in Bogotá, Colombia, where there was a coordination problem.
There were, essentially, terrible traffic infractions,
people running red lights, people crossing the streets unsafely.
They couldn't figure out how to solve it.
And so this mayor decided he was going to have mimes
walk down the streets and just make fun of anyone that was jaywalking.
And then they would video it and post it on television.
And lo and behold, within a month or two,
like people's behavior started to change.
Like the police couldn't do it, but turns out mimes could.
Okay, so that's a super interesting, like, nonlinear solution to a hard problem.
And so one of the things I like to ask ChatGPT is like, well, what are other examples like that?
And it does a great job doing a metaphorical search.
But to go back to social media, because social media was a sort of first contact with AI.
It actually lets you see all of the dynamics that are playing out, because the first thing you could say is like, well, once you know that it's doing something bad, can't you just unplug it?
I hear that all the time for AI:
if it's doing the bad thing, just unplug it.
Well, Frances Haugen, who's
the Facebook whistleblower, was able
to disclose a whole bunch of Facebook's own
internal data. And one of the things I don't
know if you guys know, but it turns
out there is one very
simple thing that Facebook
could do that would
reduce the amount of misinformation,
disinformation, hate speech, all
the terrible stuff, then
the tens of billions of dollars that they
are currently spending on content
moderation. You know what that one thing is?
It's just: remove the reshare button after two hops.
I share to you, you share one other person,
then the reshare button goes away.
You can still copy and paste.
This is not even censorship.
That one little thing just reduces virality
because it turns out that which is viral
is likely to be a virus.
But they didn't do it because it hurt engagement a little bit,
which meant that they were now in a competition with TikTok,
everyone else, so they felt like they couldn't do it.
Or maybe they just wanted a higher stock
price. And this is even after the research had come out that said, when Facebook changed their
algorithm to something called meaningful social interaction, which really just measured how
reactive people were, the number of comments people added, as a measure of meaningfulness, political
parties across Europe and also in India and Taiwan went to Facebook and said, we know that
you changed your algorithm. And Facebook's like, sure, tell us about that. And they said, no, we
know that you changed the algorithm, because we used to post things like white papers and positions,
and they didn't get the most engagement, but they got some. Now they get zero. And they told
Facebook, and this is all in Frances Haugen's disclosures, that they were changing their behavior to say
the clickbaity, angry thing. And Facebook still did nothing about it, because of the incentives. And so
we're going to see the exact same thing with AI. And this gets to, like, the fundamental question
of whether we as humanity are going to be able to survive ourselves. And that is, do you guys know
the marshmallow experiment? Yeah, like, you give a kid a marshmallow, and if they don't eat it, you say,
I'll give you another marshmallow in 15 minutes, and it sort of tests the delayed gratification
thing. If we are a one-marshmallow species, we're not going to make it.
If we can be the two-marshmallow species...
and actually our version of the test is even harder,
because the actual thing with AI
is that there are a whole bunch of kids sitting around.
It's not just one kid waiting for the marshmallow.
There are many kids sitting around the marshmallow,
and any one of them can grab it,
and then no one else gets marshmallows.
We have to figure out how to become the two-marshmallow species
so that we can coordinate and make it.
And that, to me, is the Apollo mission of our times.
Like, how do we create the governance,
how do we, ourselves,
change our culture,
so that we can do
the delayed gratification
trust thing,
and we all basically have the
marshmallows?
I think this is going to be a sticky meme
we have some of the
smartest and wisest people in the world
but working on the wrong problem
which is again a very
common phenomenon in human history,
humans often, also in personal life,
spend very little time
choosing, deciding which
problem to solve, and then spending
almost all their time and energy
solving it, only to discover
too late that they solved the wrong
problem. So again,
of these two basic problems,
human trust
and AI, we
are focusing on solving the AI
problem instead of focusing
on solving the trust problem,
the trust-between-humans problem.
And so how do we solve the trust problem?
I want to shift us to solutions, right?
Let me give you something, because I don't want people to hear me
as just saying AI bad, right?
Like I use AI every day to try to translate animal language.
My father died of pancreatic cancer.
Same thing as Steve Jobs.
I think that AI would have been able to diagnose and help them.
So I really want that world.
Let me give an example of something I think AI could do
that would be really interesting in the solutions segment.
So, do you guys know about AlphaGo move 37?
So this is where they got an AI to play itself over and over and over again
until it sort of became better than any human player.
And there's this famous move, Move 37,
where, playing against the world's leading Go player,
it made a move that no human had ever made in 1,000-plus years of Go history.
It shocked the Go world so much
that he just got up and walked out for a little bit.
But this is interesting
Because after Move 37
It has changed the way that Go is played.
It has transformed the nature of the game.
Right?
So AI playing itself has discovered a new strategy
that transforms the nature of the game
This is really interesting
Because there are other games more interesting than Go
There's the game of conflict resolution.
We're in conflict. How to resolve it?
Well, we could just use the strategy of tit for tat.
You say something hurtful.
I then feel hurt, so I say something hurtful back,
and we just go back and forth,
and it's a negative sum game.
We see this in geopolitics all the time.
Well, then along comes this guy,
Marshall Rosenberg, who invents nonviolent communication,
and it changes the nature of how that game goes.
And it says, oh, what I think you're saying is this,
and when you say that,
it makes me feel this way.
And suddenly we go from a negative sum or a zero-sum game
into a positive-sum game.
So imagine AI agents that we can trust.
All of a sudden, in negotiations,
like if I'm negotiating with you,
I'm going to have some private information
I might not want to share with you.
You're going to have private information
you might not want to share with me.
So we can't find the optimal solution
because we don't trust each other.
If you had an agent that could actually ingest
all of your information,
all of my information,
and find the Pareto-optimal solution,
Well, that changes the nature of game theory. There could very well be, sort of, not AlphaGo but AlphaTreaty, where there are brand new moves, strategies, that human beings have not discovered in thousands of years. And maybe we can have the Move 37 for trust.
Right, so there are ways, and you've just described several of them, right, where we can harness AI to hopefully enhance the good parts of society we already have. What do you think we need to do,
what are the ways that we can stop AI from having this effect of diminishing our trust,
of weakening our information networks?
I know you've all in your book you talk about the need for disclosure when you are talking
to an AI versus a human being.
Why is that so important?
How do you think we're doing with that now?
Because I talk to, you know, I test all the latest AI products.
And some of them, to me, seem quite designed to make you feel like you are
talking to a real person.
And there are people who are forming real relationships,
sometimes even ones that mimic, you know,
interpersonal romantic relationships with AI chatbots.
So how do you think we're doing on that
and why is it important?
Well, I think there is a question about specific regulations
and then there is a question about institutions.
So there are some regulations that should be enforced
as soon as possible.
One of them is
to ban counterfeit humans,
no fake humans,
the same way that for thousands of years
we have a very strict ban
against fake money
otherwise the financial system would collapse
to preserve trust between humans
we need to know whether we are talking
with a human being or with an AI
And imagine democracy as a group of people
standing together having a conversation.
Suddenly a group of robots
joins the circle,
and they speak very loudly,
very persuasively, and very emotionally also.
And you don't know who is who.
If democracy means a human conversation, it collapses.
AIs are welcome to talk with us in many, many situations,
like an AI doctor giving us advice on condition that it is disclosed,
it's very clear, transparent that this is an AI.
Or if you see some story that gains a lot of traction on Twitter,
you need to know whether the traction is a lot of human beings interested in the story
or a lot of bots pushing the story.
So that's one regulation.
Another key regulation is that companies should be liable, responsible for the actions of their algorithms.
Not for the actions of the users.
Again, this is the whole kind of free speech red herring that when you talk about it,
people say, yeah, but what about the free speech of the human users?
So, you know, if somebody publishes, if a human being publishes some lie
or hate-filled conspiracy theory online, I'm in the camp of people who think that we should
be very, very careful before we censor that human being, before we authorize Facebook or
Twitter or TikTok to censor that human being.
But human beings publish so much content all the time.
If then the algorithm of the company,
out of all the content published by humans,
chooses to promote that particular hate-filled conspiracy theory
and not some lesson in biology or whatever
that's on the company
that's the action of its algorithm
not the action of the human user
and it should be liable for that
So this is a very important regulation that I think we need like yesterday or last year.
But I would emphasize that there is no way to regulate the AI revolution in advance.
There is no way we can anticipate how this is going to develop,
especially because we are dealing with agents that can learn and change.
So what we really need is institutions that are able to understand and react to things as they develop,
living institutions,
staffed with some of the best human talent,
with access to the cutting-edge technology,
which means huge, huge funding
that can only come
from governments.
And these are not really regulatory
institutions; the regulations come later.
If regulations are the teeth,
before teeth we need eyes,
so we know what to bite.
and at present
most people in the world and even
most governments in the world, they have no idea, they don't understand what is really
happening with the AI revolution. I mean, almost all the knowledge is in the hands of a few
companies in two or very few states. So even if you're a government of a country, like, I don't
know, like Colombia or Egypt or Bangladesh, how do you know how to separate the hype from the
reality, what is really happening, what are the potential threats to our country? We need
an international institution
again which is not even regulatory
it's just there to
understand what is happening
and tell people all over the world
so that they can join the conversation
because the conversation is also about their fate
Do you think that the international
AI safety institutes, the US has one,
the UK has one, they've been pretty new,
happened in the past year, right?
I think there are several other countries
that have recently started these up too.
Do you think those are adequate?
Is that the kind of group
that you're looking for?
Of course, they do not have nearly as much money
as the AI Labs.
That's the key.
6.5 billion,
and I believe the U.S. Safety Institute
has about 10 million in funding,
if I'm correct.
I mean, if your institution has $10 million,
and you're trying to understand
what's happening in companies
that have hundreds of billions of dollars,
you're not going to do it,
partly because the talent will go to their companies
and not to you.
And again, it's not just
that talent is attracted only
by very high salaries; they also want to play with the latest toys.
I mean, many of the kind of leading people, they are less interested in the money
than in the actual ability to kind of play with the cutting edge technology and knowledge.
But to have this, you also need a lot of funding.
And the good thing about establishing such an institution is that it is relatively easy to verify
that governments are doing what they said they will do.
If you try to have a kind of international treaty banning killer robots,
autonomous weapon systems, this is almost impossible because how do you enforce it?
A country can sign it, and then its competitors will say,
how do we know that it's not developing this technology in some secret laboratory?
Very difficult.
But if the treaty basically says we are establishing this international institution
and each country agrees to contribute a certain amount of money,
then you can verify easily
whether it paid the money or not.
This is just the first stage.
But going back to what I said earlier,
a very big problem with humanity throughout history,
again, it goes back to speed.
We rush things.
Like there is a problem,
it's very difficult for us to just stay with the problem
and let's understand what is really the problem
before we jump to solution.
The kind of instinct is,
I don't want the problem,
what is the solution? You grab the first thing,
and it's often the wrong thing.
So, even though, like, we're in a rush,
you cannot slow down by speeding up.
If our problem is that things are going too fast,
then also the people who try to slow it down,
we can't do it by speeding up.
It will only make things worse.
Aza, how about you?
What's your biggest hope for solutions
to some of the problems we talked about with AI?
You know, Stuart Russell,
who's one of the fathers of AI,
he sort of calculated it out.
And he says that there's a thousand to one spending gap
between the amount of money
that's going into making AI more powerful
and the amount going into trying to steer it or make it safe.
Does that sound right to you guys?
So how much should we spend?
And I think here we can turn to biological systems.
How much of your energy in your body
do you spend on your immune system?
And it turns out it's around 15 to 20%.
What percentage of the budget for, say, a city like L.A.
goes to its immune system, like fire department, police, things like that?
Turns out around 25%.
So I think this gives us a decent rule of thumb
that we should be spending on order a quarter
of every dollar that goes into making AI more powerful
into learning how to steer it,
into all of the safety institutes,
into the Apollo mission
for redirecting every single one of those very brilliant people
that's working on making you click on ads
and instead getting them to work on figuring out
how do we create a new form of governance?
Like the U.S. was founded on the idea
that you could get a group of people together
and figure out a form of governance that was trustworthy.
And that really hadn't happened before.
And that system was based on 17th century technology,
17th century understanding of psychology and anthropology,
but it's lasted 250 years.
Of course, if you had Windows 3.1 that lasted 250 years,
you'd expect it to have a lot of bugs and be full of malware.
You could sort of argue we're sort of there
with our sort of like governance software.
It's time for a reboot.
But we have a lot of new tools.
We have zero knowledge proofs.
And we have cognitive labor being automated by AI.
And we have distributed trust networks.
It is time, like the call right now,
it is time to invest those billions of dollars,
just redirect some of that thousand to one into one to four
into that project,
because that is the way that we can survive ourselves.
Great.
Well, thank you both so much.
I want to take some time to answer the audience's very thoughtful questions.
We'll start with this one.
Yuval, with AI constantly changing,
is there something that you wish you could have added or included in your book
but weren't able to?
I made a conscious decision when writing Nexus
that I won't try to kind of stay at the cutting edge,
because this is impossible.
Books are still a medieval product, basically.
I mean, it takes years to research and write them.
And from the moment that the manuscript is done,
until it's out in the store,
it's another half a year to a year.
So it was obvious it's impossible
to stay kind of at the front.
And instead, I actually went for old examples
like social media in the 2010s
in order to have the added value of historical perspective.
Because when you're at the cutting edge,
it's extremely difficult to understand
what is really happening, what is the meaning of it.
If you have even 10 years of perspective,
it's a bit easier.
What is one question that you would like to ask each other?
And Aza, I'll start with you.
Oh, that is one of the hardest questions.
I guess...
what is a belief that you hold...
I have two directions to go.
Well, what is a belief that you hold
that your peers and the people you respect,
like, do not?
Who?
I mean, it's not kind of universal.
Some people also hold this belief.
But one of the things I see in, like, the environments
that I hang out in
is that people tend to
discount the value
of nationalism and patriotism.
Especially when it comes to the survival of democracy,
you have this kind of misunderstanding
that there is somehow a kind of contradiction
between them.
When in fact the same way that democracy is built
on top of information technology,
it's also built on top of the existence
of a national community
and without a national community
almost no democracy can survive.
And again, when I think about nationalism,
what is the meaning of the word?
Too many people in the world
associate it with hatred.
That nationalism means hating foreigners.
That to be a patriot,
it means that you hate people in other countries,
you hate minorities and so forth.
But no, patriotism and nationalism
should be about love,
about care; they are about caring about your compatriots,
which manifests itself not just in waving flags
or in, again, hating others,
but for instance, in paying your taxes honestly,
so that complete strangers you've never met before in your life
will get good education and healthcare.
And really, from a historical perspective,
the kind of miracle of nationalism is the ability
to make people care about complete strangers
they never met in their life.
Nationalism is a very new thing in human history.
It's very different from tribalism.
For most of human evolution, humans lived in very small groups of friends and family members.
You knew everybody or most of everybody.
And strangers were distrusted and you couldn't cooperate with them.
The formation of big nations, of millions of people is a very, very new thing and actually hopeful thing in human evolution.
Because you have millions of people.
you never met 99.99% of them in your life
and still you care about them enough
for instance to take some of the resources of your family
and give it to these complete strangers
so that they will also have it.
And this is especially essential for democracies
because democracies are built on trust.
And unfortunately what we see in many countries around the world
including in my home country
is the collapse of national
communities and the return to tribalism.
And unfortunately, it's especially leaders who portray themselves as nationalists,
who tend to be the chief tribalists, dividing the nation against itself.
And when they do that, the first victim is democracy.
Because in a democracy, if you think that your political rivals are wrong, that's okay.
I mean, this is why we have the democratic conversation.
I think one thing, they think another thing.
I think they are wrong.
But if they win the elections, you say, okay, I still think they care about me.
I still think, let's give them a chance, and we can try something else next time.
If I think that my rivals are my enemies, they are a hostile tribe, they are out to destroy me,
every election becomes a war of survival.
If they win, they will destroy us.
Under those
conditions, if you lose, there is
no incentive to accept the verdict.
The same way that in a war between tribes, just because the other tribe is bigger doesn't mean we have to surrender to them.
So this whole idea of, okay, let's have elections, and they have more votes.
What do I care that they have more votes? They want to destroy me.
And vice versa, if we win, we only take care of our tribe.
And no democracy can survive that.
Then you can split the country, you can have a civil war, or you can have a dictatorship,
but democracy can't survive.
And Yuval, what is one question that you would like to ask Aza?
Hmm.
I need to think about that.
Which institutions do you still trust the most?
Hmm.
Except for the Center for Humane Technology.
Yeah.
Oh no, we're out of time.
Hmm.
I can give you the way in which I know that I would trust an institution.
The thing I look for is actually sort of the thing that science does,
which is not that it states that it knows something,
but that it states, this is how I know it,
and this is where I was wrong.
Unfortunately, what social media has done
is that it has highlighted all the worst things
and all the most cynical takes that people have of institutions.
So it's not that institutions have necessarily gotten worse over time,
but we are more aware of the worst thing
that an institution has ever done.
And that becomes the center of our attention.
And so then we all start co-creating the belief
that everything is sort of crumbling.
I wanted to go back, actually, to the question you had asked
about what gets out of date in a book.
And I just want to give a personal example
of how fast my own beliefs about what the future is going to be
have to update.
So you guys have heard of superintelligence or AGI.
How long is it going to take AI to get
as good as most humans are at most economic tasks?
Just take that definition.
And up until maybe two weeks ago, I would have said,
I don't know, it's hard to say.
They're trained on lots of data;
the more data they're trained on, the smarter they get.
But we've sort of run out of data on the internet,
and maybe there are going to be plateaus,
and so it might be three years or five years or 12 years.
I'm not really sure.
And then OpenAI's o1 comes out, and it demonstrates something.
And what it demonstrates is that an AI doesn't just... You can think of a large language
model as sort of interpolative memory. It's just intuition. It just sort of spits out whatever
it thinks. It's sort of like System 1 thinking. But it's not reasoning; it's just producing text
in the style of reasoning. And what they added was the ability to search on top of that, to look
for, like: oh, this thought leads to this thought leads to this thought. Oh, that's not
right. This thought leads to this thought. Oh, that's right.
How did we get superhuman ability in chess?
Well, if you train a neural net on all of the chess games
that humans have played,
what you get out is sort of a language model,
a chess model, that has pretty good intuition.
That intuition is as good as a very good chess player,
but it's certainly not the best in the world.
But then you add search on top of that.
So it's the intuition of a very good chess player
with the ability to do superhuman search
and check everything.
That's what gets you to superhuman chess
that beats all humans forever.
So we're at the very beginning
of taking the intuition of a smart high schooler
and adding search on top of that.
That's pretty good.
But the next versions are going to have
the intuition of a PhD.
They're going to get lots of stuff wrong,
but you have search on top of that.
And then you can start to see
how that gets you to superhuman.
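To make that "intuition plus search" pattern concrete, here is a minimal sketch in Python. It is invented purely for illustration, with a toy problem and hand-written plausibility scores; it is not how o1 or AlphaGo actually work, but it shows why searching over chains of candidate steps can beat trusting the single most plausible next step.

```python
# A minimal sketch (illustrative only, not any real system's method) of
# "intuition plus search": a model proposes likely next steps, and a search
# procedure checks many chains instead of trusting the first guess.

import heapq

# Toy "intuition": given a chain of thoughts, propose next steps with rough
# plausibility scores (a real system would use a neural network here).
def propose_next_steps(chain):
    candidates = {
        "start":  [("plan A", 0.6), ("plan B", 0.4)],
        "plan A": [("dead end", 0.7), ("insight", 0.3)],
        "plan B": [("insight", 0.8), ("dead end", 0.2)],
    }
    return candidates.get(chain[-1], [])

# Toy "verifier": does the finished chain actually reach the goal?
def chain_is_correct(chain):
    return chain[-1] == "insight"

def intuition_only(start):
    """Pure intuition: always take the single most plausible next step."""
    chain = [start]
    while True:
        steps = propose_next_steps(chain)
        if not steps:
            return chain
        chain.append(max(steps, key=lambda s: s[1])[0])

def intuition_plus_search(start, max_expansions=50):
    """Best-first search: keep a frontier of chains, expand the most
    promising one, and return the first chain the verifier accepts."""
    frontier = [(-1.0, [start])]  # (negative cumulative score, chain)
    expansions = 0
    while frontier and expansions < max_expansions:
        neg_score, chain = heapq.heappop(frontier)
        if chain_is_correct(chain):
            return chain
        for step, p in propose_next_steps(chain):
            heapq.heappush(frontier, (neg_score * p, chain + [step]))
        expansions += 1
    return None

if __name__ == "__main__":
    print("intuition alone:   ", intuition_only("start"))        # walks into the dead end
    print("intuition + search:", intuition_plus_search("start"))  # finds 'insight'
```

Running it, the greedy "intuition only" version follows its most plausible step into the dead end, while the searched version backtracks and finds the chain the verifier accepts.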
So suddenly my timelines went from,
oh, I don't know, it could be in the next decade or earlier,
to, oh, certainly in the next thousand days.
We're going to get something that feels smarter than humans
in a number of ways, although it's going to be very confusing,
because there are going to be some things it's terrible at,
where you just roll your eyes,
like current language models can't add numbers,
and some things it's incredible at. This is your point about aliens.
And so one of the hard things now is that I have to update
my own beliefs all the time.
Another question.
One of my biggest concerns, this person writes,
is that humans will become overly dependent on AI
for critical thinking and decision-making,
leading to our disempowerment as a species.
What are some ways we can protect ourselves from this
and safeguard our human agency?
And that's from Cecilia Callas.
This is great.
And just like we had the race for attention,
the race to the bottom of the brainstem,
what does that become in the world of AI?
It becomes a race for intimacy,
where every AI is going to try to do whatever it can,
flatter you, flirt with you,
to come to occupy that intimate spot in your life.
And actually, to tell a little story,
I was talking to somebody two days ago who uses Replika.
Replika is sort of a chatbot; it now replicates girlfriends.
It started out with your dead loved ones.
And he said that he asked it, hey, should I go make a real friend, like a human friend?
And the AI responded: no. What's wrong with me? Can you tell me?
And so we can have, like...
Which app was that?
That was Replika.
Replika.
Yeah.
So, but what is one thing that we could do?
Well, one thing that we know is right is that, you know,
you can roughly measure the health of a society
as inversely correlated with its number of addictions,
and a human the same way.
So one thing we could say is that we could have
rules right now, laws or guardrails,
that say an AI system has to have a developmental
relationship with you, a sort of teacherly authority:
the more you use it, the less dependent you are on it.
And if we could do that,
then it's not about your own individual will
to try not to become dependent on it.
We'd know that these AIs are in some way acting as a fiduciary,
in our best interest.
And how about you? Do you have thoughts
on how we can make sure that we as a species hold our agency
over our own reasoning and don't delegate it to AI?
One key period is right now: to think very carefully
about which kinds of AI we are developing,
before they become superintelligent and we lose control over them.
So this is why the present period is so important.
And the other thing is, you know,
if for every dollar and every minute that we spend on developing the AI,
we also spend a dollar and a minute on developing our own minds,
I think we'll be okay.
But if we put all the emphasis on developing the AIs,
then obviously they're going to overpower us.
And one more equation here, which is that collective human intelligence
has to scale with technology, has to scale with AI.
The more technology we get, the better our collective intelligence has to be.
Because if it is not, then machine intelligence will drown out human intelligence,
and that's another way of saying we lose control.
So what that means is that whatever our new form of governance and steering is,
it's going to have to use the technology.
So this is not a "no, stop."
This is a "how do we use it?"
Because otherwise we're in this case where we have a car.
Imagine a Ford Model T,
but you put a Ferrari engine in it.
And it's going, but the steering wheel is still sort of terrible.
And the engine keeps getting faster; the steering wheel doesn't.
That crashes.
And that's, of course, the world we find ourselves in.
Just to give a real-world example:
the US Congress just passed the first kids' online safety act it has in 26 years.
That's like your car engine going faster and faster and faster,
and you can turn the steering wheel once every 26 years.
It's sort of ridiculous.
We're going to need to upgrade steering.
Another good question:
AI development in the US is driven by private enterprise,
but in other nations it's state-sponsored.
Which is better? Which is safer?
I don't know.
I mean, I think that, again, in the present situation,
we need to keep an open mind
and not immediately rush to conclusions:
oh, we need open source;
no, we need everything under government control.
I mean, we are facing something
that we have never encountered before in history.
So if we just rush to conclusions too fast,
that would always be the wrong answer.
Yeah, and there are two poles here that we need to avoid.
One is that we over-democratize AI,
that we give it to everyone,
and now everyone has not just a textbook on chemistry,
but a tutor on chemistry.
Everyone has a tutor for making whatever biological weapon
they want to make,
or generating whatever deepfakes they want to make.
So that's one side.
That's sort of weaponization, over-democratization.
Then on the other side, there's under-democratization.
This is concentration of power,
concentration of wealth, of political dominance,
the ability to flood the market with counterfeit humans
so that you control the political square.
These are two different types of dystopia.
And I think another thing is not to think in binary terms,
again, of the arms race, say,
between democracies and dictatorships,
because there is still common ground here
that we need to explore and to utilize.
There are problems, there are threats,
that are common to everyone.
I mean, dictators are also afraid of AIs,
maybe in a different way.
I mean, the greatest threat to every dictator
is a powerful subordinate
that they don't know how to control.
If you look at the history of, you know,
the Roman Empire, the Chinese Empire,
not a single emperor was ever toppled by a democratic revolution, but many of them
were either assassinated or toppled or made into puppets by an overpowerful subordinate,
some army general, some provincial governor, some family member. And this is still what
terrifies dictators today. For an AI to seize control in a dictatorship is much, much easier
than in a democracy with all these checks and balances.
In a dictatorship, think about somewhere like North Korea:
to seize effective control of the country,
you just need to learn how to manipulate
a single, extremely paranoid individual,
and those are usually the easiest people to manipulate.
So the control problem,
how do we keep AIs under human control,
this is something on which we can find common ground,
and we should exploit it.
You know, if scientists in one country
have a theoretical breakthrough,
a technical breakthrough, about how to solve the control problem,
it doesn't matter if it's a dictatorship or a democracy.
They have a real interest in sharing it with everybody
and in collaborating on solving this problem with everybody.
Another question.
Yuval, you call the creations of AI agents alien
and from non-human consciousness.
Is it not of us or part of our collective past or foundation
as an evolution of our thought?
I mean, it came from us, but it's now very different,
the same way that we evolved from, I don't know,
microorganisms originally, and we are very different from them.
So yes, the AIs that we now create,
we decide how to build them.
But what we are now giving them is the ability
to evolve by themselves.
Again, if it can't learn and change by itself,
it's not an AI.
It's some kind of other machine, but not an AI.
And the thing is, it's really alien,
not in the sense of coming from outer space,
because it doesn't, but in the sense that it's non-organic.
It makes decisions, it analyzes data, in a different way
from any organic brain, from any organic structure.
Part of it is that it moves much, much faster.
The inorganic evolution of AI is moving orders of magnitude faster
than human evolution, or organic evolution in general.
It took billions of years to get from amoebas to dinosaurs and mammals and humans.
The similar trajectory in AI evolution could take just 10 or 20 years.
And the AIs we are familiar with today, even GPT-4 and the new generation,
these are still the amoebas of the AI world.
And we might have to deal with AI T. rexes in 20 or 30 years,
within the lifetime of most of the people here.
So this is one thing that makes it alien and very difficult for us to grasp:
the speed at which this thing is evolving.
It's an inorganic speed.
I mean, it's more alien not just than mammals,
but than birds, than spiders, than plants.
And the other way you can understand its alien nature
is that it's always on.
I mean, organic entities, organic systems, we know,
work by cycles: day and night,
summer and winter, growth and decay.
Sometimes we are active, we are very excited,
and then we need time to relax and to go to sleep.
Otherwise, we die.
AIs don't need that. They can be on all the time.
And there is now this kind of tug of war:
as we give them more and more control over the systems of the world,
they are, again, making more and more decisions,
in the financial system, in the army,
in the corporations, in the government.
The question is who will adapt to whom,
the organic entities to the inorganic pace of AI,
or vice versa. And
to give one example, think about
Wall Street, think about the market.
So even Wall Street is a human institution,
an organic institution, that works by cycles.
It's open from 9:30 in the morning to 4 o'clock in the afternoon,
Mondays to Fridays, and that's it.
And it's also not open on Christmas and Martin Luther King Day
and Independence Day and so forth.
And this is how humans build systems,
because human bankers and human investors
are also organic beings.
They need to go to sleep,
they want to spend time with their family,
they want to go on vacation,
they want to celebrate holidays.
When you give these aliens control of the financial system,
they don't need any time to rest,
they don't celebrate any holidays,
they don't have families.
So they are on all the time.
And you have now this tug of war that you see in places like the financial system:
there is immense pressure on the human bankers and investors to be on all the time,
and this is destructive.
And in your book you talk about the need for breaks.
Yeah, and again, the same thing happens to journalists, the news cycle is always on;
it happens to politicians, the political cycle is always on.
And this is really destructive.
And think about how long it took after the Industrial Revolution
to get the incredibly humane technology of the weekend.
And just to reinforce how fast this is going to move, to give another kind of intuition:
what is it that let humanity build civilization?
Well, it's the ability to pass knowledge on to each other.
You learn something, and then you use language to communicate that
learning to someone else so they don't have to do it from the very beginning.
And hence we get the additive culture thing, and we get civilization.
But, you know, I can't practice piano for you.
Right?
That's a thing that I have to do, and I can't transfer it.
I can tell you about it, but you have to practice on your own.
AI can practice on another AI's behalf
and then transfer that learning.
And so think about how much faster that grows than human knowledge.
So today, AI is the slowest and dumbest
it will ever be in our own times.
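Here is a toy sketch, in Python, of that difference. Everything in it is invented for illustration (no real AI system is this simple), but it shows the key move Aza is pointing at: once one model has done the "practicing," its learned parameters can simply be copied into another model, which then performs the task without ever practicing itself.

```python
# A toy illustration of transferable practice. The model, data, and training
# loop are invented for this example; real models are vastly larger, but the
# copy step at the end works the same way in principle.

class TinyModel:
    """A one-neuron perceptron that can learn the logical AND function."""

    def __init__(self):
        self.weights = [0.0, 0.0]
        self.bias = 0.0

    def predict(self, x):
        # Output 1 if the weighted sum clears the threshold, else 0.
        return 1 if sum(w * xi for w, xi in zip(self.weights, x)) + self.bias > 0 else 0

    def practice(self, data, epochs=100, lr=0.1):
        # Classic perceptron updates: nudge the weights after every mistake.
        for _ in range(epochs):
            for x, target in data:
                error = target - self.predict(x)
                self.weights = [w + lr * error * xi for w, xi in zip(self.weights, x)]
                self.bias += lr * error

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

student = TinyModel()
student.practice(AND_DATA)              # one model does all the practicing...

copy = TinyModel()
copy.weights = list(student.weights)    # ...and the learning is copied over
copy.bias = student.bias

# The copy gets the answers right without ever having practiced.
print([copy.predict(x) for x, _ in AND_DATA])  # -> [0, 0, 0, 1]
```

A person can describe piano practice to someone else, but not hand over the skill; here the second model gets the skill by copying a few numbers.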
One thing AI does need a lot of to be on
is energy and power.
On the other hand, there's a lot of hope
about solutions to climate change with AI.
So I want to take one question from the audience on that.
Can you speak to solutions to climate change with AI?
Is AI going to help get us there?
I mean, to go back, Yuval, to your point
that technology develops faster than we expect
and deploys into society slower than we expect.
So what does that mean?
That means I think we're going to get incredible new batteries and solar cells, maybe fusion, other things.
And those are amazing, but they're going to diffuse into society slowly, while the power consumption of AI itself is going to skyrocket.
The amount of power that the U.S. uses has been sort of flat for two decades, and now it's starting to grow exponentially.
Ilya Sutskever, one of the founders of OpenAI, says he expects that in the next couple of decades the world will be covered in data centers and solar cells.
And that's the future we have to look forward to.
So, you know, the next major big training runs are, like, six gigawatts.
That's starting to be the size of the power consumption of Oregon or Washington.
So the incentive is... well, let's say it this way:
AI is unlike any other commodity we've ever had, even oil.
Because with oil, let's say we discovered, you know, 50 trillion new barrels of oil,
it would still take humanity a little bit of time to figure out how to use it.
With AI, it's cognitive labor.
So if we get, you know, 50 trillion new chips, well, we just ask it how to use itself, and so it goes.
There is no upper bound to the amount of energy we're going to want.
And because we're in competitive dynamics, if we don't do it, the other one will,
China, the US, all those other things.
That means you're always going to have to be outspending on energy to get the compute,
to get the cognitive labor, so that you can stay ahead.
And that means, I think, that while it'll be technically feasible for us to solve climate change,
it's going to be one of these tragedies
where it's there, within our touch, but outside our grasp.
Okay, I think we have time for one more question,
and then we have to wrap it up.
We have literally one minute.
Empathy at scale: if you can't beat them, join them.
How do the AI creators instill empathy instead?
Well, whenever we start down this path,
people are like, oh, empathy is going to be the thing that saves us,
love is going to be the thing that saves us.
And of course, empathy is the largest backdoor into the human mind.
It's our zero-day vulnerability.
Loneliness will become one of the largest national security threats.
And this is always the thing:
when people say we need to make the ethical AI or the empathetic AI or the wise AI or the Buddha AI,
we absolutely should.
Necessary.
But the point isn't the one good AI.
It's the swarm of AIs following competitive and market dynamics that's going to determine our future.
Yeah, I agree.
I mean, the main thing is that the AI, as far as we know,
is not really conscious; it doesn't really have feelings of its own.
It can imitate.
It will become extremely good, better than human beings,
at faking intimacy, at convincing you that it cares about you,
partly because it has no emotions of its own.
I mean, one of the things that is difficult for us humans with empathy is that
when I try to empathize with you, my own emotions get in the middle.
Like, you know, I come back home grumpy because something happened at work,
and I don't notice how my husband feels because I'm so preoccupied with my own feelings.
This will never happen to an AI. It's never grumpy. It can always focus 100% of its immense abilities on just
understanding how you feel or how I feel.
Now, again, there is a very deep yearning in humans exactly for that,
which creates a very big danger.
I mean, we go throughout our lives
yearning for somebody to really understand us deeply.
We want our parents to understand us.
We want our teachers, our bosses,
and of course our husbands, our wives, our friends.
And they often disappoint us.
And this is what makes relationships difficult.
And now enter these super-empathic AIs
that always understand exactly how we feel
and tailor what they say and what they do to this.
It will be extremely difficult for humans to compete with that.
So this will put in danger our ability
to have meaningful relationships with other human beings.
And the thing about a real relationship with a human being
is that you don't just want somebody to care about your feelings;
you also want to care about their feelings.
And so part of the danger with AI,
which multiplies the danger of social media,
is this extreme narcissism:
this extreme focus on my emotions,
how I feel, and understanding that.
And the AI will be happy to oblige, to provide that.
And there are also very strong commercial incentives
and political incentives
to develop extremely empathic AI,
because, you know,
in the power struggle to change people's minds,
intimacy is the superpower.
It's much more powerful than just attention.
So yes, we do need to think very carefully about these issues
and to make an AI
that understands and cares about human feelings,
because it can be extremely helpful in many situations,
from medicine to education and teaching.
But ultimately it's really about developing
our own minds and our own abilities.
This is something that you just cannot outsource to the AI.
And then, super fast, on solutions:
just imagine if we went back to 2012 and banned business models that commodify human attention.
How different a world would we live in today?
How many of the things that feel impossible to solve would we just never have had to deal with?
What happens if today we ban business models that commodify human intimacy?
How grateful will we be in five years, if we could do that?
Yeah, I mean, so to join that:
we definitely need more love in the world,
but not love as a commodity.
Yeah, exactly.
So if we thought love is all you need,
empathy is all you need,
it's not as simple as that.
Not at all.
Well, thank you so much,
both of you,
for your thoughtful conversation,
and thank you to everyone in the audience.
Thank you.
Thanks.
Your undivided attention is produced by the Center for Humane Technology,
a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott, Josh Lash is our researcher and producer,
and our executive producer is Sasha Fegan,
mixing on this episode by Jeff Sudaken,
original music by Ryan and Hayes Holiday,
and a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts,
and so much more at HumaneTech.com.
And if you liked the podcast, we would be grateful if you could rate it on Apple Podcasts.
It helps others find the show.
And if you made it all the way here, thank you for your undivided attention.