On with Kara Swisher - Yuval Noah Harari on AI & the Future of Information
Episode Date: October 14, 2024
Why, despite being the most advanced species on the planet, does it feel like humanity is teetering on the brink of self-destruction? Is it just our human nature? Israeli philosopher and historian Yuval Noah Harari doesn’t think so — he says the problem is our information trade. This is the focus of his latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI. Harari explores the evolution of our information networks, from the printing press to the dumpster fire of information on social media and what it all means as we approach the “canonization” of AI. In this episode, Kara and Harari discuss why information is getting worse; how fiction fuels engagement; and why truth tends to sink in the flood of information washing over us. Vote for Kara as Best Host in the Current Events for Signal’s Listener’s Choice Awards here: https://vote.signalaward.com/PublicVoting#/2024/shows/craft/best-host-current-events Questions? Comments? Email us at on@voxmedia.com or find Kara on Threads/Instagram @karaswisher Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from ServiceNow, the AI platform for business transformation.
You've heard the big hype around AI.
The truth is, AI is only as powerful as the platform it's built into.
ServiceNow is the platform that puts AI to work for people across your business,
removing friction and frustration for your employees,
supercharging productivity for your developers,
providing intelligent tools for your service agents to make customers happier, all built into a single platform you can use right now.
That's why the world works with ServiceNow. Visit servicenow.com slash AI for people to learn more.
Support for the show comes from Zelle. Listen, by now, most of you know not to send money to that so-called foreign prince.
But what about when the bank sends a text alert that your account has been compromised
and you have to click this link to secure it?
Think twice, actually.
Don't think at all.
Don't click it.
Scammers aren't new, but the sophisticated ones are getting savvier about how to separate
you from your money.
Stay vigilant and educate yourself about the tools and techniques
these criminals have at their disposal.
Learn more at zellepay.com slash safety.
Support for On With Kara Swisher
comes from Virgin Atlantic.
Planes, trains, and automobiles.
Besides being a good movie,
they are an essential part of our travel experience.
But we don't want just essentials.
We want to have a bit of luxury doing it. Virgin Atlantic takes the VIP treatment to the next level.
With a private wing to check in and your own security channel at London Heathrow,
you can glide from car to clubhouse lounge, a destination in its own right, in 10 minutes or
less. On board, you'll find a dedicated bar and social space and your own private suite to stretch
out in, with lots of space to store all your bits and bobs, a lay-flat bed, and delicious dining from beginning to end. Just be sure to
leave room for dessert. Their mile high tea with all the little cakes and sandwiches is a show
stopper. Check out virginatlantic.com for your next trip to London and beyond and see for yourself
how traveling for business can always be a pleasure.
Hi, everyone from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher and I'm Kara Swisher.
My guest today is Israeli historian and philosopher Yuval
Noah Harari, who is out with his latest book, Nexus. It's called A Brief History of Information
Networks from the Stone Age to AI. It's kind of a third in a series, after Sapiens, A Brief History
of Humankind, and Homo Deus, A Brief History of Tomorrow. Let me be clear, none of them are brief.
And once again, Harari is covering
a lot of ground and he's breaking ground. He's not just looking backwards, he's analyzing the present
complicated paradigm shift we find ourselves in with AI and emerging information networks,
and helping us understand where it could lead us in the future. Spoiler alert, according to him,
it doesn't look good. But he makes a lot of suggestions based on historical experience
of how we can prevent the worst-case scenario, the collapse of democracy.
Oh, that.
The AI revolution is obviously something many people, including me, are following very closely.
And strangely, despite his apocalyptic vision of the AI future,
Harari is revered in Silicon Valley. Maybe not after this book. Daniel Immerwahr from The Atlantic
has said Harari is to the tech CEO what David Foster Wallace once was to the Williamsburg hipster.
Let's see if that holds true after he's done raining on their parade. My conversation with
Yuval was taped at City Arts and Lectures in San Francisco. It was a packed audience. And again,
thank you so much to a terrific organization in my amazing city. It's a really important addition to this ongoing conversation we're having about
the impact of tech, especially AI, on society. One more thing, I've been nominated for Best Host
in the Signal Listener's Choice Awards. If you agree that I'm best, and why wouldn't you,
you can vote at vote.signalaward.com or follow the link in the show
notes. Thanks so much. Now let's get to it. So welcome to City Arts and Lectures. It's great
to see you again, and thanks for joining me. Thank you. It's really good to be here with you.
So we're here to talk about your latest book, Nexus,
a brief history of information networks from the Stone Age to AI.
It's not brief. It's quite a large and long book.
From the Stone Age to AI, like 400 pages, it's quite brief.
All right. Okay, fine. Okay. All right. Okay, if you say so.
Your first bestseller, Sapiens, obviously, A Brief History of Humankind.
I like to use brief all the time.
It came out in 2011. And then you wrote Homo Deus, A Brief History of Tomorrow, in 2015.
So let's talk about what you were trying to do with Nexus and how it relates to these other books that you wrote.
So, I mean, it picks up where I left off with Sapiens and with Homo Deus.
The two main questions at the basis of the book are, first of all: if humans are so smart, why are we so stupid? We've managed to take over this planet, to reach the moon, to split the atom, to decipher DNA, and here we are on the verge of destroying ourselves and much of the ecological system with
us in several different ways. Could be ecological collapse, could be nuclear war, could be the rise
of AI. So what's wrong with us? And you know, this is a question that has bothered people throughout
history. And you have a lot of answers in mythologies and theologies that tell us that there is something wrong with us.
There is something wrong with human nature.
And Nexus takes a different view.
The problem is not in our nature.
The problem is in our information.
If you give good people bad information, they will make bad decisions and self-destructive decisions.
And then the book goes
on to explore the long-term history of information. Why isn't information getting better? I mean,
you would have expected that over thousands of years, our information would get better,
and it really doesn't. Large-scale modern societies are as susceptible to mass delusion and really insanity, like mass insanity,
as Stone Age tribes. If you think about the 20th century with Stalinism and Nazism, I mean,
far more powerful, of course, than Stone Age tribes, a lot more sophisticated information
technology, but still can be extremely delusional.
And when you look at the world today,
we have the most sophisticated information technology in history,
and people are unable to hold a rational conversation anymore.
You know, there's a technology term for that, which is very technical, which is garbage in, garbage out, right? And I think it explains quite a bit of it in a really easy, simple way. But the point you're making is that this is historical. It's just more sophisticated, the garbage we're making for ourselves.
Correct. But the garbage is not accidental, and it's effective in one important thing: it's effective in creating order. And
one of the key ideas of the book is that humans control the world because we can cooperate in
larger numbers than any other animals and to create a large-scale cooperative system, you need two things which pull you in different directions.
You need to know some truth about the world, but you also need to preserve order between large
numbers of people, thousands, millions, billions. And the truth is not the easiest way to do it. In most cases, fiction and fantasy
and downright lies
are more effective
in creating order among people.
Obviously, AI is very hot right now,
but, you know, in Homo Deus,
you discuss the benefits and dangers
of new information technology,
especially if humans lose control.
And you write in Nexus that since Homo Deus,
your conversations about AI have gone from being idle philosophical speculations to having, quote,
focused intensity of an emergency room. What is that conversation right now from your perspective?
About AI? I mean, there are two conversations. One is simply understanding what we are facing.
What is AI? Why is it dangerous? What is the threat? And a lot of people have difficulty
grasping it. It's not like nuclear weapons, where the danger was obvious: a nuclear war
which will just kill everybody. What's the danger in AI?
And Hollywood, I think,
did us a big service
in focusing attention on that issue
long before anybody else thought about it,
but it focused on the wrong scenario.
If you think about the Terminator or the Matrix,
the focus is on the big robot rebellion.
Right.
The moment the AIs decide to take over the world and wipe us out. And this is unlikely to happen anytime soon,
because AI doesn't have the kind of broad intelligence necessary to build a robot army
and take over a country or the whole world. And this makes people feel complacent.
And I think one of the key issues in the conversation about AI
is to explain that it's not about
the big robot rebellion.
It's more about the AI bureaucrats.
It will take the world from within
and not by rebelling from outside or from below.
I mean, AI, yes, it's not a general intelligence,
but it doesn't need to be.
Within a bureaucracy, you need only a very narrow kind of intelligence to gain enormous power.
If you think about a lawyer, for instance: if you drop a lawyer in the middle of the savannah, he or she is not a particularly powerful entity. They are much weaker than a lion or an elephant or a hyena.
But within a bureaucracy, a lawyer could be more powerful than all the lions of the world put together.
Right.
And this is where AI is stepping in. We now have millions of AI bureaucrats increasingly making important decisions about us, about the world. You apply to a bank to get a loan, it's an AI deciding.
You apply for a job. In wars, increasingly it's AI choosing the targets. So one part of the
conversation is to understand what the threat is, and we can explore that. The other part of the conversation is, of course, what to do about it. And here, I think almost everybody, after a certain amount of time, agrees that the best thing would be to simply slow down.
No, not everybody does, because of an argument that I'll get to immediately.
I mean, they will say, ideally, yes, it would be good to slow down, because it would give human society time to adapt. But we can't slow down, we can't afford to slow down, because then the bad guys, our competitors here or across the ocean, they will not slow down and they will win the race, so we must run faster.
And then you have this kind of paradox that basically they are telling us we have to run faster because we can't trust the humans, but we think we'll be able to trust the AIs.
Right, right, exactly.
And, you know, you and I had this discussion with Daniel Kahneman, the late Daniel Kahneman, about the decision-making of AI versus decision-making of humans. I don't know if you remember that. But one of the things that I
think is interesting to explore at the beginning, though, is the historical, what's happened
historically with much lesser information systems that humans construct. So the tacit assumption of this information age, and I think all ages have been information ages in a lot of ways, is, as you said, that more information will lead to the truth, which will lead to more wisdom, more openness, a better world. Obviously, that's naive, and you discuss that.
And you define information instead as something that creates new realities by connecting
different points of view into a network. But as a philosopher, do you see any danger in separating information from
truth? Most information is not truth. The truth is a very small subset of all information in the
world. The truth is, first of all, it's costly. If you want to write a truthful account of anything,
you need to research, you need to gather evidence, to fact-check, to analyze. That's very costly in terms of time and money and effort.
Fiction is very cheap.
You just write the first things that come to your mind.
The truth is also often very complicated because reality is complicated.
Whereas fiction can be made as simple as you would like it to be,
and people prefer, usually, simple stories.
And finally, the truth can be
painful. Whether the truth about us personally, my relationships, what I've done to other people,
to myself, or entire nations or cultures. Whereas fiction, you can make it as attractive,
as flattering as you would like it to be. So in this competition between truth and fiction and fantasy,
truth is at a huge disadvantage.
If you just flood the world with information...
Which was the point of a lot of people.
Most information will not be the truth.
And in this ocean of information,
if you don't give the truth some help,
some edge,
the truth tends to sink to the bottom,
not rise to the top.
Right. And we see it again and again throughout history, every time people invent a new information technology.
So it's really interesting, because I was just reading Rachel Maddow's Prequel, where she talks about Joseph Goebbels talking about this. Like, if you flood the zone with bad information, or just anything, it washes out the truth. And of course, Timothy Williams said, the truth is at the bottom of a bottomless well. You can never get to it.
And someone more modern, Steve Bannon, if you, I read Steve Bannon, I'm sorry, I do it for the
rest of you, but he talks about it incessantly, the idea of flooding the zone, because that's
the best way to confuse and distract people. So we're used to equating new technology with scientific progress, this virtuous circle,
like the case of the printing press. And this was a really interesting part of your book. So let's
go a little bit in history. You noted that when the Gutenberg press was invented, instead of
science, such as Copernicus's On the Revolutions of the Heavenly Spheres, one of the founding documents of the scientific tradition, you wrote the real bestseller of the 15th century was a book called The Hammer of Witches. Talk a little about The Hammer of Witches.
Okay. It was a bestseller.
Huge bestseller.
Yeah, huge bestseller. So, um, basically, especially
in a place like this,
in San Francisco and Silicon Valley,
you talk with people about information,
they will eventually reach out to history
to give you some historical analogies.
And they will tell you something like the same way
that the printing press caused the scientific revolution.
So what we are doing will create a new scientific revolution.
Right.
And the thing is the printing press did not cause the scientific revolution.
You have about 200 years from the time that Gutenberg brings print technology to Europe
in the middle of the 15th century until the flowering of the scientific revolution
with Newton and Leibniz and all these people in the middle of the 17th century.
During these 200 years, you have the worst wave of wars of religion in European history.
Because of the printing press?
The printing press enabled it in a way I'll explain in a moment.
And also the worst wave of witch hunt.
The big European witch hunt craze is not a medieval phenomenon.
It's a modern phenomenon that was helped along by the printing press.
Medieval Europeans didn't worry much about witches.
They thought they existed, but they didn't think they were dangerous.
Then around the time that Gutenberg comes with the press,
you have a small group of people that comes up with this conspiracy theory
that witches are not just individuals with magical powers that can brew love potions and help you find lost treasure. There is a global conspiracy led by Satan, with agents in every town and every village, that wants to destroy humanity.
So QAnon is what you're talking about, basically.
Yes. I mean, there is a direct line. The ideas are coming from there. And at first, nobody paid it much attention. Then one of the people who popularized it was a man called Heinrich Kramer. He started a do-it-yourself
witch hunt in the Alps, what is today Austria. But he was stopped by the church authorities
who thought he was crazy and expelled. Yes. And he took revenge through the printing press.
He wrote this book, The Hammer of the Witches, Malleus Maleficarum, which was, again, a do-it-yourself manual on how to identify and kill witches.
And many of the ideas we still associate today with witches,
for instance, that witches are mostly women, that they engage in wild sexual orgies,
this came to a large extent from the deranged imagination of Heinrich Kramer.
Now, just to give you a taste of this book,
there is an entire section in the book
about the ability of witches to steal penises from men.
With all kinds of evidence, you know, it's all evidence-based.
I heard from some people that... So, for instance, he brings a story, evidence, of a man who wakes up one morning to find out that his penis is gone.
Ah.
And so he suspects the local witch.
Yeah.
He goes to the house of the local witch.
Right.
And forces her to bring him back his penis. And the witch says, okay, okay, climb this tree. And he climbs the tree. And at the top of the tree, he finds a bird's nest
where the witch keeps all the penises
she stole from different men.
And she says, okay, you can take your own.
And the man, of course, takes the biggest one.
And the witch tells him, no, no, no,
you can't take this one.
This belongs to the parish priest.
Oh, okay. Good for him.
Now, you can understand why this book sold a few more copies
than Copernicus's On the Revolutions of the Heavens.
I don't think there's penises in that.
Yeah.
Now, this all sounds very, very funny.
Yeah, but it wasn't.
Until you realize the consequences.
Right.
Tens of thousands of people
were killed in the most horrendous way during the european witch hunt craze and then to give just
one example to to balance the funny story about the penises so in munich they arrested an entire
family the poppenheimer family accusing them of being part of this global conspiracy of witches.
And it's a father, a mother, and three kids.
And they start by torturing the 10-year-old boy,
the youngest of the family.
And you can still read the protocol of the interrogation in the archives in Munich today.
And they torture a 10-year-old boy in unspeakable ways
until he incriminates his mother.
He admits, yes, she's a witch.
She went to meet Satan.
She kills babies.
She summons hailstorms.
And they break the whole family.
And they execute all of them, including
the 10-year-old boy.
And, you know, this eventually
also reaches to America, the Salem
witch hunts.
And this was, again, this was fueled,
not caused by, of course,
it's not that the printing press caused it,
but the printing press enabled it to spread.
For the same reason that fake news
and conspiracy theories spread today.
And then the people that create the printing presses
don't take responsibility for it in modern day.
Yeah, I've been in that conversation.
I mean, and the thing is, that conversation happened centuries ago.
Right.
And today you again have millions of Americans basically believing in the same story about a global conspiracy led by Satan.
Right.
To destroy civilization.
Right.
And if you look, what stopped it last time?
How did we, in the end, get to the scientific revolution?
It wasn't the technology of the printing press.
It was the creation of institutions
that were dedicated to sifting through
this kind of ocean of information
and all these stories about witches and so forth
and developing mechanisms to evaluate reliable information.
And to be trusted by the population, correct?
Which is very easily fooled.
Yeah.
Right.
And to spread not the information people are most likely to buy,
but the information which is most likely to be true.
Right.
You know, when you send a manuscript to a scientific publisher,
to be published as a paper or as a book,
so they want to know not only will this sell a lot of books or a lot of copies,
is it true?
Right.
Which I think is one of the things that's happened,
if you fast forward to today,
and I want to ask a couple more historical things, because it happens again and again, this constant narrative of lying, essentially, that people tend to believe.
I had an interview with Mark Zuckerberg where we were talking about Alex Jones, who is one of the greatest purveyors of misinformation, a totally heinous character in our modern history.
And I was like, why don't you take him off the platform?
He's clearly lying, and it's not true,
and these kids weren't child actors,
and they're actually dead,
and it's damaging to the parents,
it's damaging to everyone.
And for some reason,
he switched the conversation to Holocaust deniers
when we did this interview.
And I was like, oh, what a mistake on your part,
but fine, we'll go to Holocaust deniers. I was like, no, no, let's stay with Alex Jones. But he
did it and I went along, and he started to talk about how Holocaust deniers don't mean to lie.
And I felt that was the definition of Holocaust denial. I was like, wow. And I let him spin it out,
which was really interesting to hear his point of view, which was completely ridiculous.
Because I felt that the more Holocaust deniers are on the platform, the more anti-Semitism down the river would happen.
Like you could see it happening if these people had access to other people because it's interesting stuff.
Yeah.
Right?
It's much more interesting than the facts.
And, you know, he got in a lot of trouble for saying that and et cetera, but it
didn't prevent him from continuing to allow them to thrive on the platform until later when he
decided to kick them off. He decided. One person was responsible, as far as I was concerned, for
this flood of anti-Semitism because he refused to do anything at the beginning. So talk a little
bit about, from a historical sense,
where those... Because right now it's in the hands of just a few people,
and they have real problems with history,
they have real problems with knowledge beyond coding.
Yeah, I think the main issue...
And for some reason they all have penises, but go ahead.
So they're afraid the witches will steal them.
Well, no, that's just Elon. Nobody wants your penis, Elon.
Well, um.
so how do you go from there you talk about self-correcting you talk about the importance
of self-correcting mechanisms that's what you importance of self-correcting mechanisms. That's what you call it.
I think, again, the key point about all these issues
is not whether people are allowed to say these things.
Because here, I would tend to actually agree with Zuckerberg,
with Elon Musk, with others.
We should be very careful before we start simply silencing people.
Saying, you cannot say that at all.
Well, let me make a point. They do silence when they feel like it, just so you're aware. It's a lot of hypocrisy.
I know. But I think, again, when it comes to really censoring, there is a discussion to be had.
Right. The problem is not with people saying all kinds of things. The problem is with the platforms
deliberately promoting particular kinds of things,
particular kinds of things that spread hate
and fear and anger and outrage.
At some point, Elon Musk said,
it's freedom of speech, not freedom of reach.
No, he didn't say it.
He copied it from someone, but go ahead.
It was said years before.
He was willing to repeat it.
Sure.
But the thing is that people produce
an enormous amount of content all the time.
Right.
And the question is, what gets attention?
What gets spread?
And here, the responsibility of somebody
who manages a big platform,
a media information system, whether it's a printing shop
or whether it's a social media outlet,
is to be very careful about what they choose to spread,
what they choose to promote.
And if they don't know how to tell the difference
between facts and lies, they are in the wrong business.
I mean, this is, for instance, what was built during the scientific revolution, is exactly
these mechanisms.
Right.
How do we tell the difference between facts and fiction?
So how do we build those today when this is in the hands of some very troubled people,
with full control of these things, who are either uneducated or just don't know what they're doing, actually.
It's some incompetence, some arrogance, some narcissism.
It's in the hands of a very small amount of people, and it's in the architecture of this stuff.
To me, it's the architecture, because when you look at, say, a Google,
it's designed for speed, context, and accuracy.
Like, I'm searching for ADL, I'm going to get ADL.
This is an example I always use.
That's a utility, right?
It comes to you.
What social networks are designed for is virality, speed, and engagement.
Engagement.
Which is a different architecture altogether
and it always leads in the bad direction.
So how do you create, what should influence our thinking? Because the government refuses to step
in. We'll get to that in a minute. First of all, it should be clear that the platforms
should be liable for the actions of their algorithms, not for the content of their users.
Yep. And if somebody posts a hate-filled conspiracy online, that's the
responsibility of that human being. But then if the Facebook algorithm or the TikTok algorithm
decides to promote that conspiracy theory, to recommend it, to autoplay it, this is no longer
on the user. This is the action of the algorithm. And this is the main
problem. And I mean, it's amazing to think about it. But one of the first jobs to be automated in
the world was not taxi drivers and was not textile workers. It was editors, news editors. I mean,
this used to be one of the most powerful positions in society.
If you edit a major news platform, a newspaper,
you're one of the most important people
because you shape the public conversation.
And we saw throughout history how much power editors had.
Like during the French Revolution,
Jean-Paul Marat shaped the course of the French Revolution by
editing maybe the most important newspaper of Paris or France in those times, L'Ami du Peuple.
Lenin, before he was dictator of the Soviet Union, his one job basically was editor of a newspaper,
Iskra. Mussolini was editor of the newspaper Avanti. This is how he rose to fame and then
became dictator of Italy. So like the ladder of promotion was like journalist, editor, dictator.
And now the job... I haven't reached the top one yet, but go ahead. Because you're in the wrong
period, you're in the wrong era. I mean, now the job that was once
done by Lenin and Mussolini is done
by algorithms. It's one of the
first jobs in the world to be
completely automated.
And it's still,
of course, the humans are still in control.
They give the algorithms
the goal.
And as I guess most people here
know, the goal given was increased user engagement.
And the algorithms by trial and error
discovered that the easiest
way to increase user engagement
is to spread outrage.
And this is what they did.
And they should be liable for that.
Again, not for the content of their users.
We'll be back in a minute.
Support for On with Kara Swisher comes from Delete.me.
Your data is worth a lot
and data brokers know that.
They compile your information
like your name,
contact info,
address and more
and sell it.
That can be hard to prevent, but Delete.me wants to help. When you sign up for Delete.me, you provide
them with all the info you don't want online, and their experts take it from there. And it isn't just
a one-time service. They're always monitoring to keep your info out of the hands of data brokers.
I've tried Delete.me myself, and I happen to be very very privacy forward, and I do know a lot about this,
but I still was pretty shocked by how much information is out there about me and compiled in a way that could be really problematic.
I've gotten reports from Delete Me.
The dashboard makes it pretty easy to handle and let them know what I want taken off things.
Take control of your data and keep your private life private by signing up for Delete Me, now at a special discount for our listeners. Today, get 20% off your Delete Me plan when you go to joindeleteme.com slash Cara and use the promo code Cara at checkout. The only way to get 20% off is to go to joindeleteme.com slash Cara and enter the code Cara at checkout. That's joindeleteme.com slash Cara, code Cara.
Support for On with Kara Swisher comes from Coda.
Do you know that terrible feeling when you have a million tabs and windows open just for one project?
Well, what if all those docs and spreadsheets could actually talk to each other and exist in one place?
Coda can make that happen. Coda brings the best of documents, spreadsheets, and apps into a
single platform, centralizing all of your team's processes and shared knowledge. With Coda, you can
stay aligned by managing your planning cycles in one location. You can set and measure objectives
and key results with full visibility across teams. It makes it easy for your teams to communicate and
collaborate on documents, roadmaps, and tables.
And you can make it work specifically for your team's needs when you access hundreds of templates and get inspired by others at Coda's gallery.
Coda empowers your startup to strategize, plan, and track goals effectively.
Take advantage of this limited-time offer just for startups.
Go to coda.io slash cara today and get six free months of the team plan.
That's coda.io slash Cara to get started for free and get six free months of the team plan.
coda.io slash Cara.
Support for On with Cara Swisher comes from Anthropic.
You can probably benefit from incorporating AI into your daily workflow. It doesn't matter if you're an entrepreneur trying to get a new idea off the
ground or the CFO of a multinational corporation. AI is a powerful tool worth harnessing. But exactly
how do you get started? Claude from Anthropic may be the answer. Claude is a next generation
AI assistant built to help you work more efficiently without sacrificing safety or reliability. Anthropic's latest model, Claude 3.5 Sonnet, can help you organize thoughts,
solve tricky problems, analyze data, and more, whether you're brainstorming alone or working
on a team with thousands of people, all at a price that works for just about any use case.
If you're trying to crack a problem involving advanced reasoning, need to distill the essence of complex images or graphs, or generate heaps of secure code, Claude is a great
way to save time and money. Plus, the Anthropic leadership team was founded in AI research and
built Claude with an emphasis on safety. To learn more, visit anthropic.com slash Claude.
That's anthropic.com slash Claude.
Talk a little about what happened, you know, in the most famous version of propaganda, which is still probably Nazi Germany in terms of using the media, using imagery, using propaganda.
Because I think we use the word misinformation or disinformation when we're really talking about propaganda in the end.
But it's all part of the same kind of toolkit
because a key difference between dictatorships and democracies
is that democracies work on trust and especially trust in institutions,
whereas dictatorships work on terror.
So dictators, they don't really need you to believe in a particular theory, in a particular
version of reality. They really need you to disbelieve everything. If they can destroy all
trust between people, then the only system that can still function is a dictatorship.
So, and we see it also today, and we saw it, of course, in Nazi Germany, in the Soviet Union.
I mean, there is, like, with one hand, they try to spread a particular conspiracy theory, a particular propaganda.
But with the other hand, they just try to sow distrust.
And unfortunately, both historically and also today, this is not the preserve, say, of Nazis or fascists of the extreme right.
We see it on the right and on the left as well.
It was also something very common on the Marxist left.
And the key idea, maybe the most important idea,
which is common here,
is the idea that the only reality is power and that all human relations
are power struggles. This is an idea which is common on both the extreme right and the extreme
left, which is corrosive of trust. It implies that all human institutions, whether it's the media,
whether it's science, whether it's the courts, they are just elite conspiracies to gain power.
Whenever somebody tells you something, the question to ask is not, is it the truth?
The question to ask is, whose privileges does it serve?
It's obviously not the truth because nobody cares about the truth.
All humans care only about power.
So every time somebody says something,
the only question is,
who is gaining in power by this?
Speaking of that and the Hammer of Witches,
for example,
the New York Times fact-checked over 170 posts
recently from Elon Musk,
who owns a big communications platform now,
along with the other stuff he should still be doing
and not this,
on X in a week. They found that almost one-third were false, misleading, or missing context.
This summer, the Supreme Court sidestepped whether social media companies, moving away from Musk, should or should not
moderate content. Public opinion is split,
at least in this country, because of the First Amendment, and they always get mushed in there
in a way that's very deceptive. I think conservatives think they're doing too much, progressives think too little,
and proponents of less content moderation often, as I said, cite the First Amendment,
but I don't think it's about free speech. It's about, as you said, algorithms. I'm going to
read a bit from your book. You write, the information network has become so complicated,
it relies to such an extent on opaque algorithmic decisions
and on inter-computer entities
that it has become very difficult for humans to answer
even the most basic of political questions.
Why are we fighting each other?
So talk about the impact of these unfathomable algorithms
that are owned by individual people
who direct them in some fashion or control them.
There is a big question here whether the
danger is the individuals who own them, or whether the danger potentially is the algorithms themselves.
And I agree that at the present moment, the people who own the technology probably pose the bigger danger.
But in the long term, we have to take into account
that this is the first technology in history
which can potentially make decisions
and invent new ideas by itself.
Right.
That AIs are agents and not tools.
And therefore, down the road,
the main problem could be the AIs
and not their human creators or their human owners.
Right, so that it becomes self, what it does, what it wants to do, correct?
Yeah, like in the case of the social media algorithms, I mean, today they should know
better, but at least in the beginning, the executives who gave the algorithms
the task of increasing user engagement, they did not anticipate what would happen. And I think that
most of them definitely didn't want what happened: the corrosion of trust, and the collapse of
the public conversation, and the destabilization of democracy. This was
not their plan. Well, they certainly should have anticipated it. It wasn't very hard to figure that
out, correct? I mean, when you, I was in a meeting with Facebook when they showed Facebook Live and
I said, oh, what happens when people murder each other on this thing? What happens when someone
puts a GoPro on their head and does a mass murder? What happens when there's bullying, suicide?
And literally the person there was like, you're a bummer, Kara Swisher.
I was like, yeah, I am.
You know, like humanity has become sometimes a bummer.
So how do you then self-correct that?
Because the thrust of your book is about AI.
And you say we're at a critical moment of canonization,
that AIs will become full-fledged members of our information network.
Let me read this section, because you're saying the people are the problem now,
but the algorithm itself, the AI algorithm, could become the problem down the road,
which makes you scared, because they're doing a bad job. Humans are doing a bad job right now.
In coming years, all information networks from armies to religions will gain millions of new AI members, which will process data very differently than humans do.
The new members will make alien decisions and generate alien ideas, that is, decisions and ideas that are unlikely to occur to humans, which is a plus for AI for a lot of people.
The addition of so many alien members is bound to change the shape of armies, religions, markets, and nations.
Entire political, economic, and social systems might collapse and new ones will take their place.
That's why AI should be a matter of utmost urgency, even to people who don't care about technology
and who think the most important political questions concern the survival of democracy or the fair distribution of wealth.
That sounds terrifying because people are doing a terrifically bad job right now.
So talk about that. You were not an optimist when it comes to AI. You know, you're saying Mark Zuckerberg
and Elon Musk are one thing, but boy, wait till you get a load of this AI kind of thing.
I mean, first of all, it's obvious that AI also has enormous positive potential. Otherwise,
we would not be developing it. Right. Well, there is the money, but go ahead.
No, but even the money, I mean, you need to sell something.
I mean, it does.
I mean, in many cases, people buy it because there is benefit in it.
Otherwise, it's difficult to sell it.
But the key thing is that human societies are extremely adaptable
if we give them enough time.
And this is just moving too fast.
We don't have time to adapt, again, to these completely new financial systems,
armies, religions, which will be created by all these new AI agents.
You call them alien and not artificial intelligence.
Why is that?
They are alien, not in the sense of coming from outer space, of course.
Although I can't wait till they get here and fix everything, but go ahead.
Right about now, guys, get down here.
It's because, you know, artificial intelligence conveys, I think, again, a complacent and
wrong idea that these things are artifacts.
And people, when they think of artifacts, they think we create them so we can control them,
we can anticipate them. It's only AI if it can learn and change by itself, which means that it's
going to be very difficult to anticipate and to control what it's going to do.
Even though it's fueled by our own information
that we've been uploading the past two decades, correct?
Yes. I mean, it works.
I mean, it eats all the information that humans have produced,
not just in the last two decades,
in the last tens of thousands of years.
If you think about AI that produce images,
so we feed them with all the images created
since the cave art of 40 or 50,000 years ago.
And they eat the whole of it within just a few months
and then start creating new things.
The first kind of generation,
it's mostly similar to what humans have produced so far.
But I guess that very quickly,
they will start to become more and more
creative, which also means more and more alien, more and more different from human creations.
Now, we have been used to living, you know, throughout history inside human culture. Like
everything, all the images, all the poetry, all the financial systems, all the armies,
they all came ultimately from the human imagination, from our mind.
And suddenly, very quickly, we will find ourselves living inside a culture which is more and more an alien culture,
coming out of the calculations and confabulations and imaginations
of a non-organic entity.
Which was originally fed by us.
Which was originally fed by us the same way that we originally,
we eat the plants and we eat animals and still we can create things that plants and animals can't.
Right.
Tech people are using a term agent, co-pilot, friend, assistant,
like they're lesser than, essentially,
than humans, and we are completely in
charge. I just recently spoke with Microsoft
AI CEO Mustafa Suleiman,
and they have a new word
at Microsoft called agentic,
which is agent. Yes, for agent.
It's not a word. I don't care.
It's a new word. Okay, fine,
but it's not.
You know, and you're ringing the bell.
Talk about what you think between the thinking of AI as a tool or agent,
because that's what they're leaning in on.
This is here.
We're here to help you.
When you talk about this idea of agentic,
you're talking about something very different.
It's going to be your teacher.
An agent is an entity that can make decisions by itself that you can't anticipate
and that can invent new ideas by itself that you can't anticipate. Right. That can learn and change. And, you know, you scale up.
And I'll give an example from what's happening in my home country now, in Israel.
So there is a huge debate, I'm not sure who is correct there, about the role of AI in choosing bombing targets in Gaza.
You know, in the Terminator,
you had the robots pulling the trigger.
Right.
But what's really happening in wars today,
it's the humans pulling the trigger,
but the AIs are calling the shots,
literally calling the shots.
Right.
I mean, everybody that I talked with agrees
that at present the AI has the capability of choosing targets and that it is being used to go over immense amounts of data that no human analyst is able to analyze and choose targets.
So let me read you what Suleiman said.
This is what he said to me.
It's going to be your teacher, your medical advisor, your support network.
Ultimately, it's going to take actions on your behalf. This is the friendly version of
that. You're talking about a very unfriendly version. Yeah, the target selector. And this
is still extremely primitive AI. I mean, this kind of bombing AI and ChatGPT and GPT-4,
this is like the amoebas of the AI world. You write: The easiest way for AI to seize power
is not by breaking out of Dr. Frankenstein's lab,
but by ingratiating itself with some paranoid Tiberius.
Dictator, yeah.
I can think of one or two these days.
What risks do you see?
Talk about that,
because you're talking about ingratiating and not terminating.
We tend to talk about AIs in the context of democracies,
but dictatorships also have a huge problem with AIs.
For dictators, for human dictators,
the scariest thing in the world is not a democratic revolution.
The scariest thing is a powerful subordinate
that takes power from them,
either assassinates them or manipulates them.
If you think about the Roman Empire, for instance,
this is the reference to Tiberius,
not a single Roman emperor was ever toppled
by a democratic revolution,
but a very large percentage of them
were either killed or toppled
or manipulated by powerful subordinates,
a general, a provincial governor, their wife, their son, somebody.
And this is still the number one problem for dictators around the world.
And for them to give a lot of power to an AI
that can then get out of their control is very, very tricky.
Because, you know, to seize power in a democracy, it's difficult.
Because power is distributed between many organizations, institutions.
How would the AI deal with a Senate filibuster?
It's difficult.
But to seize power in a dictatorship, you just need to learn how to manipulate a single,
very paranoid individual. And paranoid people are usually the easiest to manipulate.
So if you think... Again, I know a few. Yeah. So it's not just an issue for democracies.
So it's not good for dictatorships, but democracy also creates chaos.
One of the selling points of the Internet and social media was that it brought the world closer together.
That was at the beginning, just so you're aware.
Yes, we're all going to get along.
If I had a dollar for every time Mark Zuckerberg said community to me, I'd be very wealthy.
But somehow it didn't turn out that way.
You say right now diverging views on social media are actually leading to more separation,
it'll get worse with AI.
Talk about this, you called it the silicon curtain
and data colonialism.
So this is about what might happen
on the international level,
not within countries, but between countries.
So the two main scenarios:
one is that you'll have the world splitting into completely different
information spheres, after centuries of convergence and globalization. You will have completely... you
know, if the main metaphor in the early days of the web was the web that connects everything,
and community, and la la la, so now the main metaphor is the cocoon. Like, the web is closing
in on us, and we are enclosed within information cocoons, and different people are inside different
cocoons. Right. And the silicon curtain is, of course, a reference to the Iron Curtain, and to the idea
that we will have entire countries or entire spheres which are in completely separate information spaces.
Which we have in this country right now.
Which is developing at a very fast rate.
And then a complete breakdown in communication
and understanding between human beings.
That you cannot agree on...
there is no longer a shared human reality. And the other main danger is the rise
of new digital empires and data colonialism. The same thing that happened in the 19th century
with the Industrial Revolution: the few countries that industrialized first, they then had the power to conquer and dominate and exploit the rest of the world.
This can happen again with AI technology. But it doesn't have to be countries, it can be people.
Right, it can be people inside the country. But I mean, I think the biggest worry is still on the
international level. You know, some countries will become fabulously wealthy because of AI, and the US and China are the front runners here.
But other countries could completely collapse.
Their economy could completely collapse and they will not have the resources to retrain the workforce,
to rebuild, to adapt to the new AI economy. So we can have a repeat of the imperialist drive
of the 19th century, and this time centered around data. To control a country from afar,
you will no longer need to send in the soldiers and gunboats and machine guns.
You will basically just need to take out the data.
If you have all the data of a country,
you have all the data of every journalist,
every politician, every military officer,
and you also control the attention
of the people in that country.
You control what they see, what they hear.
You don't need to send in the soldier.
This has become a data colony.
We'll be back in a minute.
Vox Creative.
This is advertiser content from Zelle.
When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer
with a hoodie on, just kind of typing away in the middle of the night.
And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter.
These days, online scams look more like crime syndicates than individual
con artists, and they're making bank. Last year, scammers made off with more than $10 billion.
It's mind-blowing to see the kind of infrastructure that's been built to facilitate
scamming at scale. There are hundreds, if not thousands, of scam centers all around the world.
These are very savvy business people.
These are organized criminal rings.
And so once we understand the magnitude of this problem, we can protect people better.
One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them.
But Ian says one of our best defenses is simple.
We need to talk to each other.
We need to have those awkward conversations around
what do you do if you have text messages you don't recognize?
What do you do if you start getting asked to send information that's more sensitive?
Even my own father fell victim to a, thank goodness,
a smaller dollar scam, but he fell victim.
And we have these conversations all the time.
So we are all at risk. And we all need to work together to protect each other.
Learn more about how to protect yourself at vox.com slash Zelle. And when using digital
payment platforms, remember to only send money to people you know and trust.
Support for this podcast comes from Anthropic.
You already know that AI is transforming the world around us,
but lost in all the enthusiasm and excitement is a really important question.
How can AI actually work for you?
And where should you even start?
Claude from Anthropic may be the answer.
Claude is a next generation AI, built to help you work more efficiently
without sacrificing safety or reliability.
Anthropic's latest model, Claude 3.5 Sonnet,
can help you organize thoughts,
solve tricky problems, analyze data, and more.
Whether you're brainstorming alone
or working on a team with thousands of people,
all at a price that works for just about any use case.
If you're trying to crack a problem
involving advanced reasoning, need to distill the essence of complex images or graphs, or generate
heaps of secure code, Claude is a great way to save time and money. Plus, you can rest assured
knowing that Anthropic built Claude with an emphasis on safety. The leadership team founded
the company with a commitment to an
ethical approach that puts humanity first. To learn more, visit anthropic.com slash Claude.
That's anthropic.com slash Claude.
Do you feel like your leads never lead anywhere and you're making content that no one sees
and it takes forever to build a campaign?
Well, that's why we built HubSpot. It's an AI-powered customer platform that builds campaigns
for you, tells you which leads are worth knowing, and makes writing blogs, creating videos, and posting
on social a breeze. So now it's easier than ever to be a marketer. Get started at HubSpot.com slash marketers.
So before we go, I want to talk about regulation.
You've been advocating for AI regulation for a while, so have I.
Last year, you signed the open letter calling for a pause,
which was very nice, but it didn't happen, just FYI.
Nobody really expected it. Lots of people in Silicon Valley signed it
and then started their own thing.
I can think of one or two people.
You didn't.
Here in California, Gavin Newsom recently vetoed a bill that would have been the first legislation in the nation to put up AI guardrails.
It was a problematic bill.
It was somewhat unspecific, but it required safety testing.
It mandated a kill switch to turn off AI systems.
And many who criticized it said it was focused too much on frontier models.
And then Stanford professor Fei-Fei Li, who was one of the early AI people, also warned that it
would disadvantage smaller developers. And they're trying to rewrite it, but there's almost no
regulation anywhere, and there's no global regulation.
There's some talk of banning bots and deepfakes.
What do you see as the regulatory scheme to make this work?
Because we haven't been able to do the last one.
Yeah.
Not one.
So I think there are three things I can say.
I mean, there are two regulations which should be obvious.
One is that corporations should be liable
for the actions of their algorithms.
You developed it, you deployed it, you're liable.
Like with cars, like with medicines, like with food.
I agree.
The other is that AIs should not counterfeit humans.
AIs and bots are welcome to interact with us
only if they identify as AIs.
So transparency.
Yes.
But, I mean, it's going to be impossible
to just regulate AI in advance
because, again, it's an agent,
because it learns and changes very, very rapidly.
The only thing that can work
is to create living institutions,
not kind of dead regulations that are written in books,
but living institutions staffed by at least some of the best talent on the planet and with enough
budget to understand what is happening and to react on the fly. And again, before we even get
to regulation, the first step is observation. We need, before teeth... like, most people, even in this country,
hardly understand what is happening and why it is dangerous. If you go to the rest of the world, to
the countries which might end up as data colonies, they are in an even worse situation, right? Because
they have to rely on what the American or the Chinese companies or governments tell them. So what we need urgently
is an international institution,
again, with the best talent and with enough budget,
to simply tell people what is happening
and what are the dangers.
And now we can discuss what regulations...
Almost like a nuclear regulatory commission
is kind of what you're talking about, right?
But again, with nukes, it was relatively easy to understand the danger.
Right.
With AI, it's a very rapidly developing technology.
Because there's positives to it.
The Surgeon General talked about this.
There's never been technology that's both positive and possibly devastating to humanity.
So my last two questions, you say also that unsupervised algorithms should be banned from curating public debates.
Yeah, absolutely.
People do such a good job at it already.
Still, I would trust the humans more than the...
We have thousands of years of experience with humans.
We don't have experience with AIs.
I mean, in the tech world, I often hear the opposite argument.
We have thousands of years of experience with humans.
We know we can't trust them. The AIs, we don't know, so maybe they'll be good. Yeah. And that's a huge, huge gamble. Yeah, yeah. So I want to finish on this. You said the survival of democracies depends
on their ability to regulate the information market, which we have not done at all. Like, at all.
Yeah. Not one. I was just recently on a CNN debate,
and this guy who was a Trump spokesperson,
spokesmodel, really, because he didn't know anything.
I said,
how many laws are regulating the internet and free speech?
Because he was going on about free speech.
And he goes, hundreds.
And I said, zero.
But you're close, you know?
So, you know, the decisions we make in the coming years
will determine whether summoning this alien intelligence
proves to be a terminal error
or the beginning of a hopeful new chapter
of the evolution of life.
You have it both ways there, I see.
So before we go, what's your worst case scenario
and what's your best case scenario?
You know, the worst case scenario
is that AI destroys not just human civilization,
but, you know, the very light of consciousness.
That AIs are highly intelligent,
but they are non-conscious entities, at least so far.
Right.
And we could have a scenario that they take over,
these non-organic,
highly intelligent entities
take over the planet,
even spread from this planet
to the rest of the galaxy.
But it's a completely dark universe,
that consciousness,
I mean, the ability to feel things
like love and hate
and pain and pleasure,
this could be completely gone.
So this is, like, it's not a very high probability, but there is some. So we'll call that the
Thanos version, but go ahead. Yes, yeah. Okay. And that's the worst case scenario. Okay, what's the
best one? The best case scenario is that we are able to harness the immense positive potential of AI, you know, in healthcare,
in solving climate change, in education. And the key is knowing which problems to solve.
One of the key problems of humanity throughout history, and you see it in particular in Silicon Valley, is that we rush to solve problems. We do a tremendous job and then we realize we solved
the wrong problem because we
did not spend enough time on just understanding what the problem is. And, you know, we now have this
fabulous technology to say unimportant things to people we don't care about, and we still
don't know how to say the most important things to the people we most love, we most care about. So we, I mean,
it happens again and again in history. Right. And we are so wise when it comes to, you know,
finding the technical solutions, but we just don't spend enough time on choosing the right
problems to solve. Right. I have one more question. So with Sapiens, every tech bro
became a fanboy of yours, like crazy.
I know.
I was like,
have you talked to Yuval?
Do you know what Yuval said?
I'm like,
no,
I took an anthropology course in college,
unlike you.
So I did know a lot of this stuff.
So,
but they were thrilled and you really did find a way to get them relatively
educated,
not very,
about some stuff.
So they loved you,
loved you,
loved you.
Like couldn't stop talking about,
you know that, right?
They're your biggest fanboys.
This is critical.
Yeah.
What are they saying about,
they don't like me anymore.
They never did.
But what are they...
It's just out.
We'll wait and see.
What do you think?
I think they will like it less than Sapiens.
But again, I think maybe I'm less critical
than you are of them
but I think many of them
you haven't met them yet
but go ahead
I've met some of them
and many of them
are really deeply concerned about it
they are
because they understand
I mean they understand
almost better than anybody else
what they are creating
and they are very concerned
they don't know how to stop
Again, they have this basic argument that we would, most of them, not all of them,
we would like to slow down.
We realize it's dangerous.
We would like to give human society more time,
more time to think about it,
more time to develop the safety mechanisms,
but we can't stop.
We can't slow down.
They will say, we are the good guys.
If we slow down, the bad guys
in the other company or in the other country will not slow down. And then the bad guys will win the
race and take over the world. So we must do it first. Right. But that's sort of the Xi or me
argument that Mark Zuckerberg has often made. Like, it's either Xi or us. And I'm like,
is that my choice? Is there another? Is there a choice for me? I think there are more
choices, but I also think this is a
serious argument. And again, I don't
think they are kind of Hollywood science fiction
villains, just Dr. Evil out
to take over the world.
So, I
think that they would
appreciate, to some
extent, a deep and meaningful
conversation about it. They have their opinions,
but I wouldn't write them off so quickly and so easily.
All right. On that note, Yuval Harari, it's a great book.
Thank you.
Thank you.
On with Kara Swisher is produced by Christian Castro-Roussel,
Kateri Yoakum,
Jolie Myers,
and Megan Burney.
Special thanks to Corinne Ruff,
Kate Gallagher,
Kaylin Lynch,
and Claire Hyman.
Our engineers are Rick Kwan,
Fernando Arruda,
and Aaliyah Jackson.
And our theme music is by Trackademics.
If you're already following the show,
you get to climb the witch's tree and choose the biggest penis from the nest.
If not, just wait
until I've been promoted from editor-at-large at New York Magazine to dictator of all she surveys.
Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine,
the Vox Media Podcast Network, and us. We'll be back on Thursday.
Support for this podcast comes from Stripe.
Stripe is a payments and billing platform supporting millions of businesses around the world, including companies like Uber, BMW, and DoorDash. Stripe has helped countless
startups and established companies alike reach their growth targets, make progress on their
missions, and reach more customers globally. The platform offers a suite of specialized features
and tools to fast-track growth, like Stripe Billing, which makes it easy to handle subscription-based
charges, invoicing, and all recurring revenue management needs.
You can learn how Stripe helps companies of all sizes make progress at Stripe.com.
That's Stripe.com to learn more.
Stripe. Make progress.